I0725 10:31:11.529950 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0725 10:31:11.530222 7 e2e.go:124] Starting e2e run "1d0a527c-4d86-45e0-a0d9-150e97f4c9a7" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1595673070 - Will randomize all specs Will run 275 of 4992 specs Jul 25 10:31:11.588: INFO: >>> kubeConfig: /root/.kube/config Jul 25 10:31:11.591: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jul 25 10:31:11.614: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jul 25 10:31:11.646: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jul 25 10:31:11.646: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jul 25 10:31:11.646: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jul 25 10:31:11.652: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jul 25 10:31:11.652: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jul 25 10:31:11.652: INFO: e2e test version: v1.18.5 Jul 25 10:31:11.654: INFO: kube-apiserver version: v1.18.4 Jul 25 10:31:11.654: INFO: >>> kubeConfig: /root/.kube/config Jul 25 10:31:11.658: INFO: Cluster IP family: ipv4 SSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:31:11.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services Jul 25 10:31:11.756: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
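The start-up checks recorded above (all nodes schedulable, all kube-system pods running and ready, daemonsets started) can be reproduced outside the e2e framework with plain client-go. The following is a minimal sketch only, assuming the same /root/.kube/config path; it checks node readiness and omits the framework's special handling of the node-role.kubernetes.io/master taint and the kube-system pod checks.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Roughly what "Waiting up to 30m0s for all (but 0) nodes to be schedulable"
	// does: poll until every node is Ready and not cordoned.
	err = wait.PollImmediate(5*time.Second, 30*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // transient list errors are retried
		}
		for _, n := range nodes.Items {
			if n.Spec.Unschedulable {
				return false, nil
			}
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all nodes schedulable")
}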
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-2935 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2935 to expose endpoints map[] Jul 25 10:31:11.839: INFO: Get endpoints failed (63.96694ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 25 10:31:12.842: INFO: successfully validated that service endpoint-test2 in namespace services-2935 exposes endpoints map[] (1.067349795s elapsed) STEP: Creating pod pod1 in namespace services-2935 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2935 to expose endpoints map[pod1:[80]] Jul 25 10:31:15.995: INFO: successfully validated that service endpoint-test2 in namespace services-2935 exposes endpoints map[pod1:[80]] (3.144735982s elapsed) STEP: Creating pod pod2 in namespace services-2935 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2935 to expose endpoints map[pod1:[80] pod2:[80]] Jul 25 10:31:20.171: INFO: successfully validated that service endpoint-test2 in namespace services-2935 exposes endpoints map[pod1:[80] pod2:[80]] (4.171340228s elapsed) STEP: Deleting pod pod1 in namespace services-2935 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2935 to expose endpoints map[pod2:[80]] Jul 25 10:31:21.269: INFO: successfully validated that service endpoint-test2 in namespace services-2935 exposes endpoints map[pod2:[80]] (1.093617943s elapsed) STEP: Deleting pod pod2 in namespace services-2935 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2935 to expose endpoints map[] Jul 25 10:31:22.366: INFO: successfully validated that service endpoint-test2 in namespace services-2935 exposes endpoints map[] (1.092345756s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:31:22.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2935" for this suite. 
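Stripped to its essentials, the endpoint test above creates a service and then polls its Endpoints object until the expected pod-to-port map appears (and becomes empty again after the pods are deleted). A minimal client-go sketch of that flow follows; the service name matches the log, but the selector labels are illustrative and the readiness check is simplified to "some ready address on port 80" rather than the exact pod:port map the conformance test compares.

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createService creates a ClusterIP service selecting pods labelled
// name=endpoint-test2 (the label is an assumption, not the test's exact one).
func createService(cs kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	return cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
}

// waitForEndpoint polls the service's Endpoints until some ready address
// exposes the given port, mirroring the "waiting up to 3m0s ... to expose
// endpoints" loop in the log.
func waitForEndpoint(cs kubernetes.Interface, ns, svcName string, port int32, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svcName, metav1.GetOptions{})
		if err != nil {
			return false, nil // "endpoints not found" is retried, as in the log above
		}
		for _, subset := range ep.Subsets {
			for _, p := range subset.Ports {
				if p.Port == port && len(subset.Addresses) > 0 {
					return true, nil
				}
			}
		}
		return false, nil
	})
}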
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.877 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":1,"skipped":8,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:31:22.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Jul 25 10:31:22.597: INFO: PodSpec: initContainers in spec.initContainers Jul 25 10:32:11.823: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bebe4f06-8241-41f6-8654-0a63612223ba", GenerateName:"", Namespace:"init-container-3545", SelfLink:"/api/v1/namespaces/init-container-3545/pods/pod-init-bebe4f06-8241-41f6-8654-0a63612223ba", UID:"9eba5087-d2fe-48f4-9322-977dc2be52d4", ResourceVersion:"4013064", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731269882, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"597803194"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b2fdc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b2fde0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b2fe00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b2fe20)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xdcnx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d80c40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdcnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdcnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdcnx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002affbf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00295a770), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002affc80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002affca0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002affca8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002affcac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731269882, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731269882, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731269882, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731269882, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.110", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.110"}}, StartTime:(*v1.Time)(0xc002b2fe40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00295a850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00295a8c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0347a4cb31e6d61e0e051667cbc98105709e869fb86dc2e8713660416ef89088", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b2fe80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b2fe60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002affd2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:32:11.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3545" for this suite. • [SLOW TEST:49.396 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":2,"skipped":17,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:32:11.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Jul 25 10:32:17.281: INFO: Successfully updated pod "labelsupdate54369526-2f66-47d8-8c49-9e928f987464" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:32:21.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-7519" for this suite. • [SLOW TEST:9.415 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":30,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:32:21.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2117 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2117 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2117 Jul 25 10:32:21.485: INFO: Found 0 stateful pods, waiting for 1 Jul 25 10:32:31.489: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 25 10:32:31.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 25 10:32:35.780: INFO: stderr: "I0725 10:32:35.630480 30 log.go:172] (0xc00068adc0) (0xc0007e9680) Create stream\nI0725 10:32:35.630569 30 log.go:172] (0xc00068adc0) (0xc0007e9680) Stream added, broadcasting: 1\nI0725 10:32:35.633597 30 log.go:172] (0xc00068adc0) Reply frame received for 1\nI0725 10:32:35.633636 30 log.go:172] (0xc00068adc0) (0xc000b74000) Create stream\nI0725 10:32:35.633648 30 log.go:172] (0xc00068adc0) (0xc000b74000) Stream added, broadcasting: 3\nI0725 10:32:35.634738 30 log.go:172] (0xc00068adc0) Reply frame received for 3\nI0725 10:32:35.634798 30 log.go:172] (0xc00068adc0) (0xc000546000) Create stream\nI0725 10:32:35.634817 30 log.go:172] (0xc00068adc0) (0xc000546000) Stream added, broadcasting: 5\nI0725 10:32:35.635889 30 log.go:172] (0xc00068adc0) Reply frame received for 5\nI0725 10:32:35.725541 30 log.go:172] (0xc00068adc0) Data frame received for 5\nI0725 10:32:35.725574 30 log.go:172] (0xc000546000) (5) Data frame handling\nI0725 10:32:35.725594 30 log.go:172] 
(0xc000546000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 10:32:35.772093 30 log.go:172] (0xc00068adc0) Data frame received for 5\nI0725 10:32:35.772139 30 log.go:172] (0xc000546000) (5) Data frame handling\nI0725 10:32:35.772184 30 log.go:172] (0xc00068adc0) Data frame received for 3\nI0725 10:32:35.772202 30 log.go:172] (0xc000b74000) (3) Data frame handling\nI0725 10:32:35.772220 30 log.go:172] (0xc000b74000) (3) Data frame sent\nI0725 10:32:35.772294 30 log.go:172] (0xc00068adc0) Data frame received for 3\nI0725 10:32:35.772340 30 log.go:172] (0xc000b74000) (3) Data frame handling\nI0725 10:32:35.774839 30 log.go:172] (0xc00068adc0) Data frame received for 1\nI0725 10:32:35.774879 30 log.go:172] (0xc0007e9680) (1) Data frame handling\nI0725 10:32:35.774910 30 log.go:172] (0xc0007e9680) (1) Data frame sent\nI0725 10:32:35.774947 30 log.go:172] (0xc00068adc0) (0xc0007e9680) Stream removed, broadcasting: 1\nI0725 10:32:35.774968 30 log.go:172] (0xc00068adc0) Go away received\nI0725 10:32:35.775392 30 log.go:172] (0xc00068adc0) (0xc0007e9680) Stream removed, broadcasting: 1\nI0725 10:32:35.775415 30 log.go:172] (0xc00068adc0) (0xc000b74000) Stream removed, broadcasting: 3\nI0725 10:32:35.775433 30 log.go:172] (0xc00068adc0) (0xc000546000) Stream removed, broadcasting: 5\n" Jul 25 10:32:35.780: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 25 10:32:35.780: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 25 10:32:35.784: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 25 10:32:45.854: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 25 10:32:45.854: INFO: Waiting for statefulset status.replicas updated to 0 Jul 25 10:32:46.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999625s Jul 25 10:32:47.290: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.791695869s Jul 25 10:32:48.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.785992202s Jul 25 10:32:49.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.781304087s Jul 25 10:32:50.304: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.777286932s Jul 25 10:32:51.330: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.771985594s Jul 25 10:32:52.334: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.745787192s Jul 25 10:32:53.339: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.74173344s Jul 25 10:32:54.344: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.737119317s Jul 25 10:32:55.348: INFO: Verifying statefulset ss doesn't scale past 1 for another 732.062895ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2117 Jul 25 10:32:56.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 25 10:32:56.612: INFO: stderr: "I0725 10:32:56.508270 53 log.go:172] (0xc00003a9a0) (0xc0008117c0) Create stream\nI0725 10:32:56.508334 53 log.go:172] (0xc00003a9a0) (0xc0008117c0) Stream added, broadcasting: 1\nI0725 10:32:56.510893 53 log.go:172] (0xc00003a9a0) Reply frame received for 1\nI0725 
10:32:56.510926 53 log.go:172] (0xc00003a9a0) (0xc000811860) Create stream\nI0725 10:32:56.510934 53 log.go:172] (0xc00003a9a0) (0xc000811860) Stream added, broadcasting: 3\nI0725 10:32:56.511842 53 log.go:172] (0xc00003a9a0) Reply frame received for 3\nI0725 10:32:56.511870 53 log.go:172] (0xc00003a9a0) (0xc00068b720) Create stream\nI0725 10:32:56.511881 53 log.go:172] (0xc00003a9a0) (0xc00068b720) Stream added, broadcasting: 5\nI0725 10:32:56.512650 53 log.go:172] (0xc00003a9a0) Reply frame received for 5\nI0725 10:32:56.605411 53 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0725 10:32:56.605445 53 log.go:172] (0xc00068b720) (5) Data frame handling\nI0725 10:32:56.605461 53 log.go:172] (0xc00068b720) (5) Data frame sent\nI0725 10:32:56.605470 53 log.go:172] (0xc00003a9a0) Data frame received for 5\nI0725 10:32:56.605477 53 log.go:172] (0xc00068b720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 10:32:56.605501 53 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0725 10:32:56.605507 53 log.go:172] (0xc000811860) (3) Data frame handling\nI0725 10:32:56.605512 53 log.go:172] (0xc000811860) (3) Data frame sent\nI0725 10:32:56.605517 53 log.go:172] (0xc00003a9a0) Data frame received for 3\nI0725 10:32:56.605524 53 log.go:172] (0xc000811860) (3) Data frame handling\nI0725 10:32:56.607108 53 log.go:172] (0xc00003a9a0) Data frame received for 1\nI0725 10:32:56.607119 53 log.go:172] (0xc0008117c0) (1) Data frame handling\nI0725 10:32:56.607125 53 log.go:172] (0xc0008117c0) (1) Data frame sent\nI0725 10:32:56.607133 53 log.go:172] (0xc00003a9a0) (0xc0008117c0) Stream removed, broadcasting: 1\nI0725 10:32:56.607408 53 log.go:172] (0xc00003a9a0) Go away received\nI0725 10:32:56.607460 53 log.go:172] (0xc00003a9a0) (0xc0008117c0) Stream removed, broadcasting: 1\nI0725 10:32:56.607478 53 log.go:172] (0xc00003a9a0) (0xc000811860) Stream removed, broadcasting: 3\nI0725 10:32:56.607484 53 log.go:172] (0xc00003a9a0) (0xc00068b720) Stream removed, broadcasting: 5\n" Jul 25 10:32:56.612: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 25 10:32:56.612: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 25 10:32:56.616: INFO: Found 1 stateful pods, waiting for 3 Jul 25 10:33:06.743: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 25 10:33:06.743: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 25 10:33:06.743: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 25 10:33:06.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 25 10:33:07.060: INFO: stderr: "I0725 10:33:06.979181 75 log.go:172] (0xc0009faf20) (0xc000ac2460) Create stream\nI0725 10:33:06.979254 75 log.go:172] (0xc0009faf20) (0xc000ac2460) Stream added, broadcasting: 1\nI0725 10:33:06.984835 75 log.go:172] (0xc0009faf20) Reply frame received for 1\nI0725 10:33:06.984869 75 log.go:172] (0xc0009faf20) (0xc0005b1720) Create stream\nI0725 10:33:06.984879 75 log.go:172] (0xc0009faf20) (0xc0005b1720) Stream added, broadcasting: 3\nI0725 10:33:06.985943 75 
log.go:172] (0xc0009faf20) Reply frame received for 3\nI0725 10:33:06.986011 75 log.go:172] (0xc0009faf20) (0xc00044cb40) Create stream\nI0725 10:33:06.986037 75 log.go:172] (0xc0009faf20) (0xc00044cb40) Stream added, broadcasting: 5\nI0725 10:33:06.986949 75 log.go:172] (0xc0009faf20) Reply frame received for 5\nI0725 10:33:07.052364 75 log.go:172] (0xc0009faf20) Data frame received for 5\nI0725 10:33:07.052424 75 log.go:172] (0xc00044cb40) (5) Data frame handling\nI0725 10:33:07.052440 75 log.go:172] (0xc00044cb40) (5) Data frame sent\nI0725 10:33:07.052452 75 log.go:172] (0xc0009faf20) Data frame received for 5\nI0725 10:33:07.052462 75 log.go:172] (0xc00044cb40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 10:33:07.052487 75 log.go:172] (0xc0009faf20) Data frame received for 3\nI0725 10:33:07.052498 75 log.go:172] (0xc0005b1720) (3) Data frame handling\nI0725 10:33:07.052515 75 log.go:172] (0xc0005b1720) (3) Data frame sent\nI0725 10:33:07.052534 75 log.go:172] (0xc0009faf20) Data frame received for 3\nI0725 10:33:07.052544 75 log.go:172] (0xc0005b1720) (3) Data frame handling\nI0725 10:33:07.053508 75 log.go:172] (0xc0009faf20) Data frame received for 1\nI0725 10:33:07.053543 75 log.go:172] (0xc000ac2460) (1) Data frame handling\nI0725 10:33:07.053563 75 log.go:172] (0xc000ac2460) (1) Data frame sent\nI0725 10:33:07.053596 75 log.go:172] (0xc0009faf20) (0xc000ac2460) Stream removed, broadcasting: 1\nI0725 10:33:07.053659 75 log.go:172] (0xc0009faf20) Go away received\nI0725 10:33:07.054129 75 log.go:172] (0xc0009faf20) (0xc000ac2460) Stream removed, broadcasting: 1\nI0725 10:33:07.054165 75 log.go:172] (0xc0009faf20) (0xc0005b1720) Stream removed, broadcasting: 3\nI0725 10:33:07.054188 75 log.go:172] (0xc0009faf20) (0xc00044cb40) Stream removed, broadcasting: 5\n" Jul 25 10:33:07.060: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 25 10:33:07.060: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 25 10:33:07.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 25 10:33:07.585: INFO: stderr: "I0725 10:33:07.392421 98 log.go:172] (0xc0009e0a50) (0xc0009a6320) Create stream\nI0725 10:33:07.392491 98 log.go:172] (0xc0009e0a50) (0xc0009a6320) Stream added, broadcasting: 1\nI0725 10:33:07.397088 98 log.go:172] (0xc0009e0a50) Reply frame received for 1\nI0725 10:33:07.397155 98 log.go:172] (0xc0009e0a50) (0xc0009a63c0) Create stream\nI0725 10:33:07.397175 98 log.go:172] (0xc0009e0a50) (0xc0009a63c0) Stream added, broadcasting: 3\nI0725 10:33:07.398342 98 log.go:172] (0xc0009e0a50) Reply frame received for 3\nI0725 10:33:07.398377 98 log.go:172] (0xc0009e0a50) (0xc00042eb40) Create stream\nI0725 10:33:07.398388 98 log.go:172] (0xc0009e0a50) (0xc00042eb40) Stream added, broadcasting: 5\nI0725 10:33:07.399648 98 log.go:172] (0xc0009e0a50) Reply frame received for 5\nI0725 10:33:07.456403 98 log.go:172] (0xc0009e0a50) Data frame received for 5\nI0725 10:33:07.456447 98 log.go:172] (0xc00042eb40) (5) Data frame handling\nI0725 10:33:07.456479 98 log.go:172] (0xc00042eb40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 10:33:07.579263 98 log.go:172] (0xc0009e0a50) Data frame received for 3\nI0725 10:33:07.579321 98 log.go:172] 
(0xc0009a63c0) (3) Data frame handling\nI0725 10:33:07.579339 98 log.go:172] (0xc0009a63c0) (3) Data frame sent\nI0725 10:33:07.579356 98 log.go:172] (0xc0009e0a50) Data frame received for 5\nI0725 10:33:07.579373 98 log.go:172] (0xc00042eb40) (5) Data frame handling\nI0725 10:33:07.579394 98 log.go:172] (0xc0009e0a50) Data frame received for 3\nI0725 10:33:07.579401 98 log.go:172] (0xc0009a63c0) (3) Data frame handling\nI0725 10:33:07.581680 98 log.go:172] (0xc0009e0a50) Data frame received for 1\nI0725 10:33:07.581697 98 log.go:172] (0xc0009a6320) (1) Data frame handling\nI0725 10:33:07.581706 98 log.go:172] (0xc0009a6320) (1) Data frame sent\nI0725 10:33:07.581776 98 log.go:172] (0xc0009e0a50) (0xc0009a6320) Stream removed, broadcasting: 1\nI0725 10:33:07.581937 98 log.go:172] (0xc0009e0a50) Go away received\nI0725 10:33:07.582030 98 log.go:172] (0xc0009e0a50) (0xc0009a6320) Stream removed, broadcasting: 1\nI0725 10:33:07.582047 98 log.go:172] (0xc0009e0a50) (0xc0009a63c0) Stream removed, broadcasting: 3\nI0725 10:33:07.582055 98 log.go:172] (0xc0009e0a50) (0xc00042eb40) Stream removed, broadcasting: 5\n" Jul 25 10:33:07.586: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 25 10:33:07.586: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 25 10:33:07.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 25 10:33:07.989: INFO: stderr: "I0725 10:33:07.853362 118 log.go:172] (0xc0000e8d10) (0xc00041c140) Create stream\nI0725 10:33:07.853441 118 log.go:172] (0xc0000e8d10) (0xc00041c140) Stream added, broadcasting: 1\nI0725 10:33:07.856910 118 log.go:172] (0xc0000e8d10) Reply frame received for 1\nI0725 10:33:07.856966 118 log.go:172] (0xc0000e8d10) (0xc00041c280) Create stream\nI0725 10:33:07.856982 118 log.go:172] (0xc0000e8d10) (0xc00041c280) Stream added, broadcasting: 3\nI0725 10:33:07.858005 118 log.go:172] (0xc0000e8d10) Reply frame received for 3\nI0725 10:33:07.858051 118 log.go:172] (0xc0000e8d10) (0xc0009f8000) Create stream\nI0725 10:33:07.858066 118 log.go:172] (0xc0000e8d10) (0xc0009f8000) Stream added, broadcasting: 5\nI0725 10:33:07.859017 118 log.go:172] (0xc0000e8d10) Reply frame received for 5\nI0725 10:33:07.913149 118 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0725 10:33:07.913175 118 log.go:172] (0xc0009f8000) (5) Data frame handling\nI0725 10:33:07.913192 118 log.go:172] (0xc0009f8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 10:33:07.981913 118 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0725 10:33:07.981939 118 log.go:172] (0xc00041c280) (3) Data frame handling\nI0725 10:33:07.981946 118 log.go:172] (0xc00041c280) (3) Data frame sent\nI0725 10:33:07.981951 118 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0725 10:33:07.981955 118 log.go:172] (0xc00041c280) (3) Data frame handling\nI0725 10:33:07.982143 118 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0725 10:33:07.982153 118 log.go:172] (0xc0009f8000) (5) Data frame handling\nI0725 10:33:07.984133 118 log.go:172] (0xc0000e8d10) Data frame received for 1\nI0725 10:33:07.984153 118 log.go:172] (0xc00041c140) (1) Data frame handling\nI0725 10:33:07.984165 118 log.go:172] (0xc00041c140) (1) Data frame sent\nI0725 10:33:07.984180 118 
log.go:172] (0xc0000e8d10) (0xc00041c140) Stream removed, broadcasting: 1\nI0725 10:33:07.984227 118 log.go:172] (0xc0000e8d10) Go away received\nI0725 10:33:07.984419 118 log.go:172] (0xc0000e8d10) (0xc00041c140) Stream removed, broadcasting: 1\nI0725 10:33:07.984430 118 log.go:172] (0xc0000e8d10) (0xc00041c280) Stream removed, broadcasting: 3\nI0725 10:33:07.984436 118 log.go:172] (0xc0000e8d10) (0xc0009f8000) Stream removed, broadcasting: 5\n" Jul 25 10:33:07.989: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 25 10:33:07.989: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 25 10:33:07.989: INFO: Waiting for statefulset status.replicas updated to 0 Jul 25 10:33:08.029: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 25 10:33:18.036: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 25 10:33:18.036: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 25 10:33:18.036: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 25 10:33:18.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999358s Jul 25 10:33:19.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993706788s Jul 25 10:33:20.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98914299s Jul 25 10:33:21.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984700175s Jul 25 10:33:22.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98065088s Jul 25 10:33:23.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975621071s Jul 25 10:33:24.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943672348s Jul 25 10:33:25.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.921252567s Jul 25 10:33:26.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.911799814s Jul 25 10:33:27.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 906.038682ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2117 Jul 25 10:33:28.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 25 10:33:28.366: INFO: stderr: "I0725 10:33:28.277491 138 log.go:172] (0xc000abf080) (0xc0009c0640) Create stream\nI0725 10:33:28.277573 138 log.go:172] (0xc000abf080) (0xc0009c0640) Stream added, broadcasting: 1\nI0725 10:33:28.281697 138 log.go:172] (0xc000abf080) Reply frame received for 1\nI0725 10:33:28.281768 138 log.go:172] (0xc000abf080) (0xc0005777c0) Create stream\nI0725 10:33:28.281791 138 log.go:172] (0xc000abf080) (0xc0005777c0) Stream added, broadcasting: 3\nI0725 10:33:28.283420 138 log.go:172] (0xc000abf080) Reply frame received for 3\nI0725 10:33:28.283457 138 log.go:172] (0xc000abf080) (0xc000420be0) Create stream\nI0725 10:33:28.283474 138 log.go:172] (0xc000abf080) (0xc000420be0) Stream added, broadcasting: 5\nI0725 10:33:28.284267 138 log.go:172] (0xc000abf080) Reply frame received for 5\nI0725 10:33:28.358803 138 log.go:172] (0xc000abf080) Data frame received for 5\nI0725 10:33:28.358860 138 log.go:172] (0xc000420be0) (5) Data frame handling\nI0725 
10:33:28.358883 138 log.go:172] (0xc000420be0) (5) Data frame sent\nI0725 10:33:28.358898 138 log.go:172] (0xc000abf080) Data frame received for 5\nI0725 10:33:28.358913 138 log.go:172] (0xc000420be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 10:33:28.358959 138 log.go:172] (0xc000abf080) Data frame received for 3\nI0725 10:33:28.358986 138 log.go:172] (0xc0005777c0) (3) Data frame handling\nI0725 10:33:28.359012 138 log.go:172] (0xc0005777c0) (3) Data frame sent\nI0725 10:33:28.359029 138 log.go:172] (0xc000abf080) Data frame received for 3\nI0725 10:33:28.359038 138 log.go:172] (0xc0005777c0) (3) Data frame handling\nI0725 10:33:28.360382 138 log.go:172] (0xc000abf080) Data frame received for 1\nI0725 10:33:28.360413 138 log.go:172] (0xc0009c0640) (1) Data frame handling\nI0725 10:33:28.360434 138 log.go:172] (0xc0009c0640) (1) Data frame sent\nI0725 10:33:28.360457 138 log.go:172] (0xc000abf080) (0xc0009c0640) Stream removed, broadcasting: 1\nI0725 10:33:28.360482 138 log.go:172] (0xc000abf080) Go away received\nI0725 10:33:28.361006 138 log.go:172] (0xc000abf080) (0xc0009c0640) Stream removed, broadcasting: 1\nI0725 10:33:28.361031 138 log.go:172] (0xc000abf080) (0xc0005777c0) Stream removed, broadcasting: 3\nI0725 10:33:28.361041 138 log.go:172] (0xc000abf080) (0xc000420be0) Stream removed, broadcasting: 5\n" Jul 25 10:33:28.366: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 25 10:33:28.366: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 25 10:33:28.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 25 10:33:28.573: INFO: stderr: "I0725 10:33:28.501433 159 log.go:172] (0xc0009ab550) (0xc000850960) Create stream\nI0725 10:33:28.501520 159 log.go:172] (0xc0009ab550) (0xc000850960) Stream added, broadcasting: 1\nI0725 10:33:28.506396 159 log.go:172] (0xc0009ab550) Reply frame received for 1\nI0725 10:33:28.506434 159 log.go:172] (0xc0009ab550) (0xc00051da40) Create stream\nI0725 10:33:28.506450 159 log.go:172] (0xc0009ab550) (0xc00051da40) Stream added, broadcasting: 3\nI0725 10:33:28.507483 159 log.go:172] (0xc0009ab550) Reply frame received for 3\nI0725 10:33:28.507504 159 log.go:172] (0xc0009ab550) (0xc0005717c0) Create stream\nI0725 10:33:28.507511 159 log.go:172] (0xc0009ab550) (0xc0005717c0) Stream added, broadcasting: 5\nI0725 10:33:28.508447 159 log.go:172] (0xc0009ab550) Reply frame received for 5\nI0725 10:33:28.565486 159 log.go:172] (0xc0009ab550) Data frame received for 3\nI0725 10:33:28.565544 159 log.go:172] (0xc00051da40) (3) Data frame handling\nI0725 10:33:28.565571 159 log.go:172] (0xc00051da40) (3) Data frame sent\nI0725 10:33:28.565589 159 log.go:172] (0xc0009ab550) Data frame received for 3\nI0725 10:33:28.565603 159 log.go:172] (0xc00051da40) (3) Data frame handling\nI0725 10:33:28.565630 159 log.go:172] (0xc0009ab550) Data frame received for 5\nI0725 10:33:28.565657 159 log.go:172] (0xc0005717c0) (5) Data frame handling\nI0725 10:33:28.565678 159 log.go:172] (0xc0005717c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 10:33:28.565695 159 log.go:172] (0xc0009ab550) Data frame received for 5\nI0725 10:33:28.565731 159 log.go:172] (0xc0005717c0) (5) Data frame handling\nI0725 
10:33:28.567027 159 log.go:172] (0xc0009ab550) Data frame received for 1\nI0725 10:33:28.567076 159 log.go:172] (0xc000850960) (1) Data frame handling\nI0725 10:33:28.567108 159 log.go:172] (0xc000850960) (1) Data frame sent\nI0725 10:33:28.567134 159 log.go:172] (0xc0009ab550) (0xc000850960) Stream removed, broadcasting: 1\nI0725 10:33:28.567177 159 log.go:172] (0xc0009ab550) Go away received\nI0725 10:33:28.567658 159 log.go:172] (0xc0009ab550) (0xc000850960) Stream removed, broadcasting: 1\nI0725 10:33:28.567680 159 log.go:172] (0xc0009ab550) (0xc00051da40) Stream removed, broadcasting: 3\nI0725 10:33:28.567691 159 log.go:172] (0xc0009ab550) (0xc0005717c0) Stream removed, broadcasting: 5\n" Jul 25 10:33:28.574: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 25 10:33:28.574: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 25 10:33:28.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2117 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 25 10:33:28.821: INFO: stderr: "I0725 10:33:28.751344 180 log.go:172] (0xc00003ab00) (0xc0009300a0) Create stream\nI0725 10:33:28.751412 180 log.go:172] (0xc00003ab00) (0xc0009300a0) Stream added, broadcasting: 1\nI0725 10:33:28.753840 180 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0725 10:33:28.753867 180 log.go:172] (0xc00003ab00) (0xc0009301e0) Create stream\nI0725 10:33:28.753874 180 log.go:172] (0xc00003ab00) (0xc0009301e0) Stream added, broadcasting: 3\nI0725 10:33:28.755955 180 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0725 10:33:28.756083 180 log.go:172] (0xc00003ab00) (0xc00091a000) Create stream\nI0725 10:33:28.756181 180 log.go:172] (0xc00003ab00) (0xc00091a000) Stream added, broadcasting: 5\nI0725 10:33:28.757448 180 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0725 10:33:28.814993 180 log.go:172] (0xc00003ab00) Data frame received for 5\nI0725 10:33:28.815020 180 log.go:172] (0xc00091a000) (5) Data frame handling\nI0725 10:33:28.815029 180 log.go:172] (0xc00091a000) (5) Data frame sent\nI0725 10:33:28.815036 180 log.go:172] (0xc00003ab00) Data frame received for 5\nI0725 10:33:28.815046 180 log.go:172] (0xc00091a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 10:33:28.815078 180 log.go:172] (0xc00003ab00) Data frame received for 3\nI0725 10:33:28.815089 180 log.go:172] (0xc0009301e0) (3) Data frame handling\nI0725 10:33:28.815099 180 log.go:172] (0xc0009301e0) (3) Data frame sent\nI0725 10:33:28.815107 180 log.go:172] (0xc00003ab00) Data frame received for 3\nI0725 10:33:28.815113 180 log.go:172] (0xc0009301e0) (3) Data frame handling\nI0725 10:33:28.816118 180 log.go:172] (0xc00003ab00) Data frame received for 1\nI0725 10:33:28.816155 180 log.go:172] (0xc0009300a0) (1) Data frame handling\nI0725 10:33:28.816168 180 log.go:172] (0xc0009300a0) (1) Data frame sent\nI0725 10:33:28.816195 180 log.go:172] (0xc00003ab00) (0xc0009300a0) Stream removed, broadcasting: 1\nI0725 10:33:28.816217 180 log.go:172] (0xc00003ab00) Go away received\nI0725 10:33:28.816587 180 log.go:172] (0xc00003ab00) (0xc0009300a0) Stream removed, broadcasting: 1\nI0725 10:33:28.816598 180 log.go:172] (0xc00003ab00) (0xc0009301e0) Stream removed, broadcasting: 3\nI0725 10:33:28.816604 180 log.go:172] (0xc00003ab00) (0xc00091a000) Stream removed, 
broadcasting: 5\n" Jul 25 10:33:28.821: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 25 10:33:28.821: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 25 10:33:28.821: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jul 25 10:33:58.835: INFO: Deleting all statefulset in ns statefulset-2117 Jul 25 10:33:58.838: INFO: Scaling statefulset ss to 0 Jul 25 10:33:58.846: INFO: Waiting for statefulset status.replicas updated to 0 Jul 25 10:33:58.849: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:33:58.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2117" for this suite. • [SLOW TEST:97.529 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":4,"skipped":33,"failed":0} SSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:33:58.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jul 25 10:33:58.961: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jul 25 10:33:58.982: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 25 10:33:58.982: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jul 25 10:33:58.999: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 25 10:33:58.999: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jul 25 10:33:59.050: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jul 25 10:33:59.051: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jul 25 10:34:06.369: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:34:06.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-946" for this suite. 
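The behaviour verified above is LimitRange admission defaulting: when a LimitRange carries Default and DefaultRequest values, pods created without resource requirements get those values filled in. Below is a hedged client-go sketch of creating such a LimitRange; the CPU and memory values follow the expectations in the log (100m/200Mi requests, 500m/500Mi limits), while the ephemeral-storage defaults are omitted for brevity and the object name is an assumption.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDefaultsLimitRange creates a LimitRange so that every container in the
// namespace without explicit resources receives these defaults.
func createDefaultsLimitRange(cs kubernetes.Interface, ns string) error {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limits"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("200Mi"),
				},
				Default: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("500Mi"),
				},
			}},
		},
	}
	_, err := cs.CoreV1().LimitRanges(ns).Create(context.TODO(), lr, metav1.CreateOptions{})
	return err
}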
• [SLOW TEST:7.699 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":5,"skipped":39,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:34:06.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-65756dfb-2ce2-48af-a1c2-6965ef24aee5 STEP: Creating a pod to test consume secrets Jul 25 10:34:06.891: INFO: Waiting up to 5m0s for pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705" in namespace "secrets-4267" to be "Succeeded or Failed" Jul 25 10:34:06.901: INFO: Pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705": Phase="Pending", Reason="", readiness=false. Elapsed: 9.452998ms Jul 25 10:34:08.943: INFO: Pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051521749s Jul 25 10:34:10.977: INFO: Pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08574199s Jul 25 10:34:13.135: INFO: Pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243689645s STEP: Saw pod success Jul 25 10:34:13.135: INFO: Pod "pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705" satisfied condition "Succeeded or Failed" Jul 25 10:34:13.138: INFO: Trying to get logs from node kali-worker pod pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705 container secret-volume-test: STEP: delete the pod Jul 25 10:34:13.282: INFO: Waiting for pod pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705 to disappear Jul 25 10:34:13.470: INFO: Pod pod-secrets-fff454ab-ead9-46e4-9268-4b209eec2705 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:34:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4267" for this suite. 
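The secret-volume-with-mappings test boils down to creating a Secret and mounting it with an explicit key-to-path Items list, so a key such as data-1 shows up inside the container under a renamed path. A minimal sketch follows; the busybox image and cat command stand in for the conformance test's own test image, and the object names are assumptions.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretPod creates a secret and a pod that mounts it with a key-to-path
// mapping, so data-1 appears at /etc/secret-volume/new-path-data-1.
func createSecretPod(cs kubernetes.Interface, ns string) error {
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: sec.Name,
						Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}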
• [SLOW TEST:6.904 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:34:13.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-ztnq STEP: Creating a pod to test atomic-volume-subpath Jul 25 10:34:14.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ztnq" in namespace "subpath-5690" to be "Succeeded or Failed" Jul 25 10:34:14.470: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Pending", Reason="", readiness=false. Elapsed: 263.37576ms Jul 25 10:34:16.515: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307676434s Jul 25 10:34:18.522: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 4.315414605s Jul 25 10:34:20.527: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 6.319897405s Jul 25 10:34:22.531: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 8.323976968s Jul 25 10:34:24.535: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 10.328531435s Jul 25 10:34:26.540: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 12.33266538s Jul 25 10:34:28.543: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 14.336492818s Jul 25 10:34:30.548: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 16.340623596s Jul 25 10:34:32.552: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 18.345124911s Jul 25 10:34:34.556: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 20.348980101s Jul 25 10:34:36.584: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Running", Reason="", readiness=true. Elapsed: 22.376688972s Jul 25 10:34:38.596: INFO: Pod "pod-subpath-test-configmap-ztnq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.389061555s STEP: Saw pod success Jul 25 10:34:38.596: INFO: Pod "pod-subpath-test-configmap-ztnq" satisfied condition "Succeeded or Failed" Jul 25 10:34:38.598: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-ztnq container test-container-subpath-configmap-ztnq: STEP: delete the pod Jul 25 10:34:38.658: INFO: Waiting for pod pod-subpath-test-configmap-ztnq to disappear Jul 25 10:34:38.662: INFO: Pod pod-subpath-test-configmap-ztnq no longer exists STEP: Deleting pod pod-subpath-test-configmap-ztnq Jul 25 10:34:38.662: INFO: Deleting pod "pod-subpath-test-configmap-ztnq" in namespace "subpath-5690" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 25 10:34:38.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5690" for this suite. • [SLOW TEST:25.191 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":7,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 25 10:34:38.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 25 10:34:38.956: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/:
alternatives.log
containers/
[identical logs/ directory listing (alternatives.log, containers/) returned for each repeated proxy request]
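The requests above go through the apiserver's node proxy subresource to the kubelet's logs endpoint. A minimal way to issue the same request by hand, assuming the node name kali-worker and the explicit kubelet port 10250 shown in the log (a sketch, not the test's own tooling):

  kubectl get --raw "/api/v1/nodes/kali-worker:10250/proxy/logs/"
  # or via a local API proxy:
  kubectl proxy --port=8001 &
  curl http://127.0.0.1:8001/api/v1/nodes/kali-worker:10250/proxy/logs/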
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 25 10:34:45.732: INFO: 10 pods remaining
Jul 25 10:34:45.732: INFO: 10 pods has nil DeletionTimestamp
Jul 25 10:34:45.732: INFO: 
Jul 25 10:34:47.356: INFO: 0 pods remaining
Jul 25 10:34:47.357: INFO: 0 pods has nil DeletionTimestamp
Jul 25 10:34:47.357: INFO: 
Jul 25 10:34:48.758: INFO: 0 pods remaining
Jul 25 10:34:48.758: INFO: 0 pods has nil DeletionTimestamp
Jul 25 10:34:48.758: INFO: 
STEP: Gathering metrics
W0725 10:34:49.804421       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 10:34:49.804: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:34:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3760" for this suite.

• [SLOW TEST:11.180 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":9,"skipped":107,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:34:50.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-5994
STEP: creating replication controller nodeport-test in namespace services-5994
I0725 10:34:51.834503       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5994, replica count: 2
I0725 10:34:54.885097       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:34:57.885313       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 25 10:34:57.885: INFO: Creating new exec pod
Jul 25 10:35:02.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5994 execpodnkwbg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul 25 10:35:03.144: INFO: stderr: "I0725 10:35:03.053769     201 log.go:172] (0xc000b48b00) (0xc000652460) Create stream\nI0725 10:35:03.053841     201 log.go:172] (0xc000b48b00) (0xc000652460) Stream added, broadcasting: 1\nI0725 10:35:03.056390     201 log.go:172] (0xc000b48b00) Reply frame received for 1\nI0725 10:35:03.056432     201 log.go:172] (0xc000b48b00) (0xc000652500) Create stream\nI0725 10:35:03.056442     201 log.go:172] (0xc000b48b00) (0xc000652500) Stream added, broadcasting: 3\nI0725 10:35:03.057650     201 log.go:172] (0xc000b48b00) Reply frame received for 3\nI0725 10:35:03.057694     201 log.go:172] (0xc000b48b00) (0xc0006525a0) Create stream\nI0725 10:35:03.057705     201 log.go:172] (0xc000b48b00) (0xc0006525a0) Stream added, broadcasting: 5\nI0725 10:35:03.058619     201 log.go:172] (0xc000b48b00) Reply frame received for 5\nI0725 10:35:03.135253     201 log.go:172] (0xc000b48b00) Data frame received for 5\nI0725 10:35:03.135284     201 log.go:172] (0xc0006525a0) (5) Data frame handling\nI0725 10:35:03.135302     201 log.go:172] (0xc0006525a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0725 10:35:03.135752     201 log.go:172] (0xc000b48b00) Data frame received for 5\nI0725 10:35:03.135765     201 log.go:172] (0xc0006525a0) (5) Data frame handling\nI0725 10:35:03.135778     201 log.go:172] (0xc0006525a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0725 10:35:03.136022     201 log.go:172] (0xc000b48b00) Data frame received for 5\nI0725 10:35:03.136034     201 log.go:172] (0xc0006525a0) (5) Data frame handling\nI0725 10:35:03.136058     201 log.go:172] (0xc000b48b00) Data frame received for 3\nI0725 10:35:03.136091     201 log.go:172] (0xc000652500) (3) Data frame handling\nI0725 10:35:03.137603     201 log.go:172] (0xc000b48b00) Data frame received for 1\nI0725 10:35:03.137615     201 log.go:172] (0xc000652460) (1) Data frame handling\nI0725 10:35:03.137629     201 log.go:172] (0xc000652460) (1) Data frame sent\nI0725 10:35:03.137659     201 log.go:172] (0xc000b48b00) (0xc000652460) Stream removed, broadcasting: 1\nI0725 10:35:03.137679     201 log.go:172] (0xc000b48b00) Go away received\nI0725 10:35:03.138109     201 log.go:172] (0xc000b48b00) (0xc000652460) Stream removed, broadcasting: 1\nI0725 10:35:03.138133     201 log.go:172] (0xc000b48b00) (0xc000652500) Stream removed, broadcasting: 3\nI0725 10:35:03.138148     201 log.go:172] (0xc000b48b00) (0xc0006525a0) Stream removed, broadcasting: 5\n"
Jul 25 10:35:03.144: INFO: stdout: ""
Jul 25 10:35:03.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5994 execpodnkwbg -- /bin/sh -x -c nc -zv -t -w 2 10.110.85.19 80'
Jul 25 10:35:03.358: INFO: stderr: "I0725 10:35:03.281017     224 log.go:172] (0xc000a0e000) (0xc000a8c000) Create stream\nI0725 10:35:03.281098     224 log.go:172] (0xc000a0e000) (0xc000a8c000) Stream added, broadcasting: 1\nI0725 10:35:03.286605     224 log.go:172] (0xc000a0e000) Reply frame received for 1\nI0725 10:35:03.286647     224 log.go:172] (0xc000a0e000) (0xc000a8c0a0) Create stream\nI0725 10:35:03.286657     224 log.go:172] (0xc000a0e000) (0xc000a8c0a0) Stream added, broadcasting: 3\nI0725 10:35:03.287562     224 log.go:172] (0xc000a0e000) Reply frame received for 3\nI0725 10:35:03.287594     224 log.go:172] (0xc000a0e000) (0xc000a8c140) Create stream\nI0725 10:35:03.287613     224 log.go:172] (0xc000a0e000) (0xc000a8c140) Stream added, broadcasting: 5\nI0725 10:35:03.288489     224 log.go:172] (0xc000a0e000) Reply frame received for 5\nI0725 10:35:03.351642     224 log.go:172] (0xc000a0e000) Data frame received for 3\nI0725 10:35:03.351680     224 log.go:172] (0xc000a8c0a0) (3) Data frame handling\nI0725 10:35:03.351698     224 log.go:172] (0xc000a0e000) Data frame received for 5\nI0725 10:35:03.351704     224 log.go:172] (0xc000a8c140) (5) Data frame handling\nI0725 10:35:03.351729     224 log.go:172] (0xc000a8c140) (5) Data frame sent\nI0725 10:35:03.351738     224 log.go:172] (0xc000a0e000) Data frame received for 5\nI0725 10:35:03.351760     224 log.go:172] (0xc000a8c140) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.85.19 80\nConnection to 10.110.85.19 80 port [tcp/http] succeeded!\nI0725 10:35:03.353472     224 log.go:172] (0xc000a0e000) Data frame received for 1\nI0725 10:35:03.353503     224 log.go:172] (0xc000a8c000) (1) Data frame handling\nI0725 10:35:03.353516     224 log.go:172] (0xc000a8c000) (1) Data frame sent\nI0725 10:35:03.353532     224 log.go:172] (0xc000a0e000) (0xc000a8c000) Stream removed, broadcasting: 1\nI0725 10:35:03.353809     224 log.go:172] (0xc000a0e000) (0xc000a8c000) Stream removed, broadcasting: 1\nI0725 10:35:03.353833     224 log.go:172] (0xc000a0e000) (0xc000a8c0a0) Stream removed, broadcasting: 3\nI0725 10:35:03.353839     224 log.go:172] (0xc000a0e000) (0xc000a8c140) Stream removed, broadcasting: 5\n"
Jul 25 10:35:03.358: INFO: stdout: ""
Jul 25 10:35:03.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5994 execpodnkwbg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31189'
Jul 25 10:35:03.583: INFO: stderr: "I0725 10:35:03.495736     246 log.go:172] (0xc00003a420) (0xc0005e15e0) Create stream\nI0725 10:35:03.495781     246 log.go:172] (0xc00003a420) (0xc0005e15e0) Stream added, broadcasting: 1\nI0725 10:35:03.498773     246 log.go:172] (0xc00003a420) Reply frame received for 1\nI0725 10:35:03.498823     246 log.go:172] (0xc00003a420) (0xc000454a00) Create stream\nI0725 10:35:03.498841     246 log.go:172] (0xc00003a420) (0xc000454a00) Stream added, broadcasting: 3\nI0725 10:35:03.499934     246 log.go:172] (0xc00003a420) Reply frame received for 3\nI0725 10:35:03.499979     246 log.go:172] (0xc00003a420) (0xc000454aa0) Create stream\nI0725 10:35:03.499996     246 log.go:172] (0xc00003a420) (0xc000454aa0) Stream added, broadcasting: 5\nI0725 10:35:03.501126     246 log.go:172] (0xc00003a420) Reply frame received for 5\nI0725 10:35:03.575544     246 log.go:172] (0xc00003a420) Data frame received for 3\nI0725 10:35:03.575610     246 log.go:172] (0xc00003a420) Data frame received for 5\nI0725 10:35:03.575651     246 log.go:172] (0xc000454aa0) (5) Data frame handling\nI0725 10:35:03.575669     246 log.go:172] (0xc000454aa0) (5) Data frame sent\nI0725 10:35:03.575678     246 log.go:172] (0xc00003a420) Data frame received for 5\nI0725 10:35:03.575686     246 log.go:172] (0xc000454aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31189\nConnection to 172.18.0.13 31189 port [tcp/31189] succeeded!\nI0725 10:35:03.575711     246 log.go:172] (0xc000454a00) (3) Data frame handling\nI0725 10:35:03.577515     246 log.go:172] (0xc00003a420) Data frame received for 1\nI0725 10:35:03.577549     246 log.go:172] (0xc0005e15e0) (1) Data frame handling\nI0725 10:35:03.577601     246 log.go:172] (0xc0005e15e0) (1) Data frame sent\nI0725 10:35:03.577633     246 log.go:172] (0xc00003a420) (0xc0005e15e0) Stream removed, broadcasting: 1\nI0725 10:35:03.577652     246 log.go:172] (0xc00003a420) Go away received\nI0725 10:35:03.578034     246 log.go:172] (0xc00003a420) (0xc0005e15e0) Stream removed, broadcasting: 1\nI0725 10:35:03.578057     246 log.go:172] (0xc00003a420) (0xc000454a00) Stream removed, broadcasting: 3\nI0725 10:35:03.578067     246 log.go:172] (0xc00003a420) (0xc000454aa0) Stream removed, broadcasting: 5\n"
Jul 25 10:35:03.583: INFO: stdout: ""
Jul 25 10:35:03.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5994 execpodnkwbg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31189'
Jul 25 10:35:03.792: INFO: stderr: "I0725 10:35:03.708436     268 log.go:172] (0xc0009ffad0) (0xc0009f4aa0) Create stream\nI0725 10:35:03.708483     268 log.go:172] (0xc0009ffad0) (0xc0009f4aa0) Stream added, broadcasting: 1\nI0725 10:35:03.712355     268 log.go:172] (0xc0009ffad0) Reply frame received for 1\nI0725 10:35:03.712410     268 log.go:172] (0xc0009ffad0) (0xc0007dd680) Create stream\nI0725 10:35:03.712433     268 log.go:172] (0xc0009ffad0) (0xc0007dd680) Stream added, broadcasting: 3\nI0725 10:35:03.713302     268 log.go:172] (0xc0009ffad0) Reply frame received for 3\nI0725 10:35:03.713329     268 log.go:172] (0xc0009ffad0) (0xc000540aa0) Create stream\nI0725 10:35:03.713336     268 log.go:172] (0xc0009ffad0) (0xc000540aa0) Stream added, broadcasting: 5\nI0725 10:35:03.713900     268 log.go:172] (0xc0009ffad0) Reply frame received for 5\nI0725 10:35:03.785365     268 log.go:172] (0xc0009ffad0) Data frame received for 5\nI0725 10:35:03.785388     268 log.go:172] (0xc000540aa0) (5) Data frame handling\nI0725 10:35:03.785402     268 log.go:172] (0xc000540aa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31189\nConnection to 172.18.0.15 31189 port [tcp/31189] succeeded!\nI0725 10:35:03.785589     268 log.go:172] (0xc0009ffad0) Data frame received for 5\nI0725 10:35:03.785611     268 log.go:172] (0xc000540aa0) (5) Data frame handling\nI0725 10:35:03.785644     268 log.go:172] (0xc0009ffad0) Data frame received for 3\nI0725 10:35:03.785674     268 log.go:172] (0xc0007dd680) (3) Data frame handling\nI0725 10:35:03.787352     268 log.go:172] (0xc0009ffad0) Data frame received for 1\nI0725 10:35:03.787377     268 log.go:172] (0xc0009f4aa0) (1) Data frame handling\nI0725 10:35:03.787391     268 log.go:172] (0xc0009f4aa0) (1) Data frame sent\nI0725 10:35:03.787412     268 log.go:172] (0xc0009ffad0) (0xc0009f4aa0) Stream removed, broadcasting: 1\nI0725 10:35:03.787785     268 log.go:172] (0xc0009ffad0) (0xc0009f4aa0) Stream removed, broadcasting: 1\nI0725 10:35:03.787804     268 log.go:172] (0xc0009ffad0) (0xc0007dd680) Stream removed, broadcasting: 3\nI0725 10:35:03.787964     268 log.go:172] (0xc0009ffad0) (0xc000540aa0) Stream removed, broadcasting: 5\n"
Jul 25 10:35:03.793: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:03.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5994" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.520 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":10,"skipped":110,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:03.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Jul 25 10:35:03.922: INFO: Waiting up to 5m0s for pod "var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2" in namespace "var-expansion-2004" to be "Succeeded or Failed"
Jul 25 10:35:03.932: INFO: Pod "var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.675369ms
Jul 25 10:35:05.935: INFO: Pod "var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012817414s
Jul 25 10:35:07.939: INFO: Pod "var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016954578s
STEP: Saw pod success
Jul 25 10:35:07.939: INFO: Pod "var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2" satisfied condition "Succeeded or Failed"
Jul 25 10:35:07.942: INFO: Trying to get logs from node kali-worker pod var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2 container dapi-container: 
STEP: delete the pod
Jul 25 10:35:07.978: INFO: Waiting for pod var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2 to disappear
Jul 25 10:35:07.986: INFO: Pod var-expansion-71e6a79b-6106-4362-bb7e-2586f35c2fa2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:07.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2004" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:07.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 25 10:35:08.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7951'
Jul 25 10:35:08.201: INFO: stderr: ""
Jul 25 10:35:08.201: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 25 10:35:13.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7951 -o json'
Jul 25 10:35:13.357: INFO: stderr: ""
Jul 25 10:35:13.357: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-25T10:35:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-25T10:35:08Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.129\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                       
     }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-25T10:35:11Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7951\",\n        \"resourceVersion\": \"4014671\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7951/pods/e2e-test-httpd-pod\",\n        \"uid\": \"27ec0b05-e75a-4956-922e-87b07aff6d1b\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-zbdk9\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-zbdk9\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-zbdk9\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-25T10:35:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-25T10:35:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-25T10:35:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-25T10:35:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://0c001255b1d3aa52ad7481732acc428b5db252ed762329223a3da199a2c95d0c\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-25T10:35:11Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.13\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.129\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.129\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-25T10:35:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul 25 10:35:13.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7951'
Jul 25 10:35:13.666: INFO: stderr: ""
Jul 25 10:35:13.666: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 25 10:35:13.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7951'
Jul 25 10:35:17.494: INFO: stderr: ""
Jul 25 10:35:17.494: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:17.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7951" for this suite.

• [SLOW TEST:9.506 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":12,"skipped":158,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:17.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-2202e507-f820-47a8-a961-ed4d16ed0b24
STEP: Creating a pod to test consume configMaps
Jul 25 10:35:17.557: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e" in namespace "projected-7616" to be "Succeeded or Failed"
Jul 25 10:35:17.561: INFO: Pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097782ms
Jul 25 10:35:19.584: INFO: Pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027064359s
Jul 25 10:35:21.589: INFO: Pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e": Phase="Running", Reason="", readiness=true. Elapsed: 4.031586753s
Jul 25 10:35:23.593: INFO: Pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035633148s
STEP: Saw pod success
Jul 25 10:35:23.593: INFO: Pod "pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e" satisfied condition "Succeeded or Failed"
Jul 25 10:35:23.596: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e container projected-configmap-volume-test: 
STEP: delete the pod
Jul 25 10:35:23.706: INFO: Waiting for pod pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e to disappear
Jul 25 10:35:23.711: INFO: Pod pod-projected-configmaps-0826d14b-55da-4596-b535-8453400cc67e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7616" for this suite.

• [SLOW TEST:6.217 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":160,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:23.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-252c0919-5d98-42cc-b152-492b69312efb
STEP: Creating a pod to test consume configMaps
Jul 25 10:35:23.837: INFO: Waiting up to 5m0s for pod "pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6" in namespace "configmap-7313" to be "Succeeded or Failed"
Jul 25 10:35:23.849: INFO: Pod "pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.130192ms
Jul 25 10:35:25.854: INFO: Pod "pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016657223s
Jul 25 10:35:27.858: INFO: Pod "pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020948868s
STEP: Saw pod success
Jul 25 10:35:27.858: INFO: Pod "pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6" satisfied condition "Succeeded or Failed"
Jul 25 10:35:27.861: INFO: Trying to get logs from node kali-worker pod pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6 container configmap-volume-test: 
STEP: delete the pod
Jul 25 10:35:27.896: INFO: Waiting for pod pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6 to disappear
Jul 25 10:35:27.902: INFO: Pod pod-configmaps-15aa37a2-9619-4f62-9b31-0d91669227e6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:27.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7313" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":162,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:27.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:35:28.017: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul 25 10:35:33.021: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 25 10:35:33.021: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 25 10:35:33.063: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-9656 /apis/apps/v1/namespaces/deployment-9656/deployments/test-cleanup-deployment 1f06267c-0c10-41cb-9907-783cc8660294 4014875 1 2020-07-25 10:35:33 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-07-25 10:35:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00045d998  ClusterFirst map[]     false false false  
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jul 25 10:35:33.126: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-9656 /apis/apps/v1/namespaces/deployment-9656/replicasets/test-cleanup-deployment-b4867b47f 2f73edd3-1805-4ea3-a947-e26141fe27a4 4014877 1 2020-07-25 10:35:33 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 1f06267c-0c10-41cb-9907-783cc8660294 0xc00083e990 0xc00083e991}] []  [{kube-controller-manager Update apps/v1 2020-07-25 10:35:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 102 48 54 50 54 55 99 45 48 99 49 48 45 52 49 99 98 45 57 57 48 55 45 55 56 51 99 99 56 54 54 48 50 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 
58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00083ea98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:35:33.126: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul 25 10:35:33.126: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-9656 /apis/apps/v1/namespaces/deployment-9656/replicasets/test-cleanup-controller 4beba34d-5b56-4efc-82cd-098547aa2baf 4014876 1 2020-07-25 10:35:27 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 1f06267c-0c10-41cb-9907-783cc8660294 0xc00045df57 0xc00045df58}] []  [{e2e.test Update apps/v1 2020-07-25 10:35:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 10:35:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 102 48 54 50 54 55 99 45 48 99 49 48 45 52 49 99 98 45 57 57 48 55 45 55 56 51 99 99 56 54 54 48 50 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 
112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00083e468  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:35:33.198: INFO: Pod "test-cleanup-controller-mzb9g" is available:
&Pod{ObjectMeta:{test-cleanup-controller-mzb9g test-cleanup-controller- deployment-9656 /api/v1/namespaces/deployment-9656/pods/test-cleanup-controller-mzb9g 5a66e9f8-3593-4626-86b1-3470806af2c0 4014865 0 2020-07-25 10:35:28 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 4beba34d-5b56-4efc-82cd-098547aa2baf 0xc00083f1c7 0xc00083f1c8}] []  [{kube-controller-manager Update v1 2020-07-25 10:35:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 101 98 97 51 52 100 45 53 98 53 54 45 52 101 102 99 45 56 50 99 100 45 48 57 56 53 52 55 97 97 50 98 97 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:35:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 
97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2f5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2f5d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2f5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{
PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:35:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:35:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:35:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.247,StartTime:2020-07-25 10:35:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:35:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0b5f07aed2c0025546a029612a2871e537c638d416925b6e99bc65aa42d03915,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:35:33.198: INFO: Pod "test-cleanup-deployment-b4867b47f-bqtmj" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-bqtmj test-cleanup-deployment-b4867b47f- deployment-9656 /api/v1/namespaces/deployment-9656/pods/test-cleanup-deployment-b4867b47f-bqtmj 928a1447-01d3-45cf-ae9c-a6674375f587 4014882 0 2020-07-25 10:35:33 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 2f73edd3-1805-4ea3-a947-e26141fe27a4 0xc00083f3b0 0xc00083f3b1}] []  [{kube-controller-manager Update v1 2020-07-25 10:35:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 102 55 51 101 100 100 51 45 49 56 48 53 45 52 101 97 51 45 97 57 52 55 45 101 50 54 49 52 49 102 101 50 55 97 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2f5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2f5d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2f5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:35:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:33.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9656" for this suite.

• [SLOW TEST:5.382 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":15,"skipped":170,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:33.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 25 10:35:45.467: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.468: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:45.494205       7 log.go:172] (0xc0024b38c0) (0xc002ed1540) Create stream
I0725 10:35:45.494231       7 log.go:172] (0xc0024b38c0) (0xc002ed1540) Stream added, broadcasting: 1
I0725 10:35:45.496421       7 log.go:172] (0xc0024b38c0) Reply frame received for 1
I0725 10:35:45.496456       7 log.go:172] (0xc0024b38c0) (0xc002ed15e0) Create stream
I0725 10:35:45.496467       7 log.go:172] (0xc0024b38c0) (0xc002ed15e0) Stream added, broadcasting: 3
I0725 10:35:45.497832       7 log.go:172] (0xc0024b38c0) Reply frame received for 3
I0725 10:35:45.497868       7 log.go:172] (0xc0024b38c0) (0xc00296b540) Create stream
I0725 10:35:45.497881       7 log.go:172] (0xc0024b38c0) (0xc00296b540) Stream added, broadcasting: 5
I0725 10:35:45.498726       7 log.go:172] (0xc0024b38c0) Reply frame received for 5
I0725 10:35:45.581864       7 log.go:172] (0xc0024b38c0) Data frame received for 5
I0725 10:35:45.581910       7 log.go:172] (0xc00296b540) (5) Data frame handling
I0725 10:35:45.581935       7 log.go:172] (0xc0024b38c0) Data frame received for 3
I0725 10:35:45.581952       7 log.go:172] (0xc002ed15e0) (3) Data frame handling
I0725 10:35:45.581976       7 log.go:172] (0xc002ed15e0) (3) Data frame sent
I0725 10:35:45.581992       7 log.go:172] (0xc0024b38c0) Data frame received for 3
I0725 10:35:45.582000       7 log.go:172] (0xc002ed15e0) (3) Data frame handling
I0725 10:35:45.583302       7 log.go:172] (0xc0024b38c0) Data frame received for 1
I0725 10:35:45.583323       7 log.go:172] (0xc002ed1540) (1) Data frame handling
I0725 10:35:45.583335       7 log.go:172] (0xc002ed1540) (1) Data frame sent
I0725 10:35:45.583343       7 log.go:172] (0xc0024b38c0) (0xc002ed1540) Stream removed, broadcasting: 1
I0725 10:35:45.583390       7 log.go:172] (0xc0024b38c0) Go away received
I0725 10:35:45.583681       7 log.go:172] (0xc0024b38c0) (0xc002ed1540) Stream removed, broadcasting: 1
I0725 10:35:45.583699       7 log.go:172] (0xc0024b38c0) (0xc002ed15e0) Stream removed, broadcasting: 3
I0725 10:35:45.583707       7 log.go:172] (0xc0024b38c0) (0xc00296b540) Stream removed, broadcasting: 5
Jul 25 10:35:45.583: INFO: Exec stderr: ""
Jul 25 10:35:45.583: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.583: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:45.608327       7 log.go:172] (0xc0024b3ef0) (0xc002ed17c0) Create stream
I0725 10:35:45.608354       7 log.go:172] (0xc0024b3ef0) (0xc002ed17c0) Stream added, broadcasting: 1
I0725 10:35:45.610975       7 log.go:172] (0xc0024b3ef0) Reply frame received for 1
I0725 10:35:45.611034       7 log.go:172] (0xc0024b3ef0) (0xc00296b5e0) Create stream
I0725 10:35:45.611051       7 log.go:172] (0xc0024b3ef0) (0xc00296b5e0) Stream added, broadcasting: 3
I0725 10:35:45.611984       7 log.go:172] (0xc0024b3ef0) Reply frame received for 3
I0725 10:35:45.612018       7 log.go:172] (0xc0024b3ef0) (0xc001f905a0) Create stream
I0725 10:35:45.612033       7 log.go:172] (0xc0024b3ef0) (0xc001f905a0) Stream added, broadcasting: 5
I0725 10:35:45.613042       7 log.go:172] (0xc0024b3ef0) Reply frame received for 5
I0725 10:35:45.683994       7 log.go:172] (0xc0024b3ef0) Data frame received for 3
I0725 10:35:45.684028       7 log.go:172] (0xc00296b5e0) (3) Data frame handling
I0725 10:35:45.684050       7 log.go:172] (0xc00296b5e0) (3) Data frame sent
I0725 10:35:45.684067       7 log.go:172] (0xc0024b3ef0) Data frame received for 3
I0725 10:35:45.684090       7 log.go:172] (0xc00296b5e0) (3) Data frame handling
I0725 10:35:45.684114       7 log.go:172] (0xc0024b3ef0) Data frame received for 5
I0725 10:35:45.684135       7 log.go:172] (0xc001f905a0) (5) Data frame handling
I0725 10:35:45.685843       7 log.go:172] (0xc0024b3ef0) Data frame received for 1
I0725 10:35:45.685866       7 log.go:172] (0xc002ed17c0) (1) Data frame handling
I0725 10:35:45.685874       7 log.go:172] (0xc002ed17c0) (1) Data frame sent
I0725 10:35:45.685883       7 log.go:172] (0xc0024b3ef0) (0xc002ed17c0) Stream removed, broadcasting: 1
I0725 10:35:45.685892       7 log.go:172] (0xc0024b3ef0) Go away received
I0725 10:35:45.686005       7 log.go:172] (0xc0024b3ef0) (0xc002ed17c0) Stream removed, broadcasting: 1
I0725 10:35:45.686024       7 log.go:172] (0xc0024b3ef0) (0xc00296b5e0) Stream removed, broadcasting: 3
I0725 10:35:45.686034       7 log.go:172] (0xc0024b3ef0) (0xc001f905a0) Stream removed, broadcasting: 5
Jul 25 10:35:45.686: INFO: Exec stderr: ""
Jul 25 10:35:45.686: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.686: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:45.719479       7 log.go:172] (0xc002720210) (0xc001f908c0) Create stream
I0725 10:35:45.719505       7 log.go:172] (0xc002720210) (0xc001f908c0) Stream added, broadcasting: 1
I0725 10:35:45.724115       7 log.go:172] (0xc002720210) Reply frame received for 1
I0725 10:35:45.724191       7 log.go:172] (0xc002720210) (0xc001f90960) Create stream
I0725 10:35:45.724231       7 log.go:172] (0xc002720210) (0xc001f90960) Stream added, broadcasting: 3
I0725 10:35:45.727390       7 log.go:172] (0xc002720210) Reply frame received for 3
I0725 10:35:45.727425       7 log.go:172] (0xc002720210) (0xc00296b680) Create stream
I0725 10:35:45.727438       7 log.go:172] (0xc002720210) (0xc00296b680) Stream added, broadcasting: 5
I0725 10:35:45.728505       7 log.go:172] (0xc002720210) Reply frame received for 5
I0725 10:35:45.783509       7 log.go:172] (0xc002720210) Data frame received for 5
I0725 10:35:45.783533       7 log.go:172] (0xc00296b680) (5) Data frame handling
I0725 10:35:45.783603       7 log.go:172] (0xc002720210) Data frame received for 3
I0725 10:35:45.783656       7 log.go:172] (0xc001f90960) (3) Data frame handling
I0725 10:35:45.783686       7 log.go:172] (0xc001f90960) (3) Data frame sent
I0725 10:35:45.783710       7 log.go:172] (0xc002720210) Data frame received for 3
I0725 10:35:45.783727       7 log.go:172] (0xc001f90960) (3) Data frame handling
I0725 10:35:45.785216       7 log.go:172] (0xc002720210) Data frame received for 1
I0725 10:35:45.785261       7 log.go:172] (0xc001f908c0) (1) Data frame handling
I0725 10:35:45.785308       7 log.go:172] (0xc001f908c0) (1) Data frame sent
I0725 10:35:45.785336       7 log.go:172] (0xc002720210) (0xc001f908c0) Stream removed, broadcasting: 1
I0725 10:35:45.785363       7 log.go:172] (0xc002720210) Go away received
I0725 10:35:45.785543       7 log.go:172] (0xc002720210) (0xc001f908c0) Stream removed, broadcasting: 1
I0725 10:35:45.785576       7 log.go:172] (0xc002720210) (0xc001f90960) Stream removed, broadcasting: 3
I0725 10:35:45.785602       7 log.go:172] (0xc002720210) (0xc00296b680) Stream removed, broadcasting: 5
Jul 25 10:35:45.785: INFO: Exec stderr: ""
Jul 25 10:35:45.785: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.785: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:45.822187       7 log.go:172] (0xc00220eb00) (0xc002dfd860) Create stream
I0725 10:35:45.822216       7 log.go:172] (0xc00220eb00) (0xc002dfd860) Stream added, broadcasting: 1
I0725 10:35:45.824984       7 log.go:172] (0xc00220eb00) Reply frame received for 1
I0725 10:35:45.825024       7 log.go:172] (0xc00220eb00) (0xc002dfd900) Create stream
I0725 10:35:45.825037       7 log.go:172] (0xc00220eb00) (0xc002dfd900) Stream added, broadcasting: 3
I0725 10:35:45.825951       7 log.go:172] (0xc00220eb00) Reply frame received for 3
I0725 10:35:45.825986       7 log.go:172] (0xc00220eb00) (0xc001f90b40) Create stream
I0725 10:35:45.826000       7 log.go:172] (0xc00220eb00) (0xc001f90b40) Stream added, broadcasting: 5
I0725 10:35:45.826789       7 log.go:172] (0xc00220eb00) Reply frame received for 5
I0725 10:35:45.890916       7 log.go:172] (0xc00220eb00) Data frame received for 5
I0725 10:35:45.890941       7 log.go:172] (0xc001f90b40) (5) Data frame handling
I0725 10:35:45.890971       7 log.go:172] (0xc00220eb00) Data frame received for 3
I0725 10:35:45.891010       7 log.go:172] (0xc002dfd900) (3) Data frame handling
I0725 10:35:45.891046       7 log.go:172] (0xc002dfd900) (3) Data frame sent
I0725 10:35:45.891069       7 log.go:172] (0xc00220eb00) Data frame received for 3
I0725 10:35:45.891088       7 log.go:172] (0xc002dfd900) (3) Data frame handling
I0725 10:35:45.892634       7 log.go:172] (0xc00220eb00) Data frame received for 1
I0725 10:35:45.892658       7 log.go:172] (0xc002dfd860) (1) Data frame handling
I0725 10:35:45.892674       7 log.go:172] (0xc002dfd860) (1) Data frame sent
I0725 10:35:45.892706       7 log.go:172] (0xc00220eb00) (0xc002dfd860) Stream removed, broadcasting: 1
I0725 10:35:45.892846       7 log.go:172] (0xc00220eb00) Go away received
I0725 10:35:45.892946       7 log.go:172] (0xc00220eb00) (0xc002dfd860) Stream removed, broadcasting: 1
I0725 10:35:45.892971       7 log.go:172] (0xc00220eb00) (0xc002dfd900) Stream removed, broadcasting: 3
I0725 10:35:45.892980       7 log.go:172] (0xc00220eb00) (0xc001f90b40) Stream removed, broadcasting: 5
Jul 25 10:35:45.892: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 25 10:35:45.893: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.893: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:45.929286       7 log.go:172] (0xc0021b6630) (0xc002ed19a0) Create stream
I0725 10:35:45.929309       7 log.go:172] (0xc0021b6630) (0xc002ed19a0) Stream added, broadcasting: 1
I0725 10:35:45.931912       7 log.go:172] (0xc0021b6630) Reply frame received for 1
I0725 10:35:45.931952       7 log.go:172] (0xc0021b6630) (0xc001f90be0) Create stream
I0725 10:35:45.931968       7 log.go:172] (0xc0021b6630) (0xc001f90be0) Stream added, broadcasting: 3
I0725 10:35:45.933357       7 log.go:172] (0xc0021b6630) Reply frame received for 3
I0725 10:35:45.933437       7 log.go:172] (0xc0021b6630) (0xc0016985a0) Create stream
I0725 10:35:45.933465       7 log.go:172] (0xc0021b6630) (0xc0016985a0) Stream added, broadcasting: 5
I0725 10:35:45.934679       7 log.go:172] (0xc0021b6630) Reply frame received for 5
I0725 10:35:45.986731       7 log.go:172] (0xc0021b6630) Data frame received for 5
I0725 10:35:45.986782       7 log.go:172] (0xc0016985a0) (5) Data frame handling
I0725 10:35:45.986806       7 log.go:172] (0xc0021b6630) Data frame received for 3
I0725 10:35:45.986821       7 log.go:172] (0xc001f90be0) (3) Data frame handling
I0725 10:35:45.986835       7 log.go:172] (0xc001f90be0) (3) Data frame sent
I0725 10:35:45.986847       7 log.go:172] (0xc0021b6630) Data frame received for 3
I0725 10:35:45.986858       7 log.go:172] (0xc001f90be0) (3) Data frame handling
I0725 10:35:45.988245       7 log.go:172] (0xc0021b6630) Data frame received for 1
I0725 10:35:45.988271       7 log.go:172] (0xc002ed19a0) (1) Data frame handling
I0725 10:35:45.988284       7 log.go:172] (0xc002ed19a0) (1) Data frame sent
I0725 10:35:45.988310       7 log.go:172] (0xc0021b6630) (0xc002ed19a0) Stream removed, broadcasting: 1
I0725 10:35:45.988332       7 log.go:172] (0xc0021b6630) Go away received
I0725 10:35:45.988457       7 log.go:172] (0xc0021b6630) (0xc002ed19a0) Stream removed, broadcasting: 1
I0725 10:35:45.988485       7 log.go:172] (0xc0021b6630) (0xc001f90be0) Stream removed, broadcasting: 3
I0725 10:35:45.988507       7 log.go:172] (0xc0021b6630) (0xc0016985a0) Stream removed, broadcasting: 5
Jul 25 10:35:45.988: INFO: Exec stderr: ""
Jul 25 10:35:45.988: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:45.988: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:46.021902       7 log.go:172] (0xc00220f130) (0xc002dfdae0) Create stream
I0725 10:35:46.021927       7 log.go:172] (0xc00220f130) (0xc002dfdae0) Stream added, broadcasting: 1
I0725 10:35:46.024524       7 log.go:172] (0xc00220f130) Reply frame received for 1
I0725 10:35:46.024567       7 log.go:172] (0xc00220f130) (0xc0023f0000) Create stream
I0725 10:35:46.024588       7 log.go:172] (0xc00220f130) (0xc0023f0000) Stream added, broadcasting: 3
I0725 10:35:46.025694       7 log.go:172] (0xc00220f130) Reply frame received for 3
I0725 10:35:46.025737       7 log.go:172] (0xc00220f130) (0xc00296b720) Create stream
I0725 10:35:46.025752       7 log.go:172] (0xc00220f130) (0xc00296b720) Stream added, broadcasting: 5
I0725 10:35:46.026646       7 log.go:172] (0xc00220f130) Reply frame received for 5
I0725 10:35:46.105754       7 log.go:172] (0xc00220f130) Data frame received for 5
I0725 10:35:46.105813       7 log.go:172] (0xc00296b720) (5) Data frame handling
I0725 10:35:46.105841       7 log.go:172] (0xc00220f130) Data frame received for 3
I0725 10:35:46.105849       7 log.go:172] (0xc0023f0000) (3) Data frame handling
I0725 10:35:46.105870       7 log.go:172] (0xc0023f0000) (3) Data frame sent
I0725 10:35:46.105884       7 log.go:172] (0xc00220f130) Data frame received for 3
I0725 10:35:46.105899       7 log.go:172] (0xc0023f0000) (3) Data frame handling
I0725 10:35:46.106936       7 log.go:172] (0xc00220f130) Data frame received for 1
I0725 10:35:46.106954       7 log.go:172] (0xc002dfdae0) (1) Data frame handling
I0725 10:35:46.106977       7 log.go:172] (0xc002dfdae0) (1) Data frame sent
I0725 10:35:46.107002       7 log.go:172] (0xc00220f130) (0xc002dfdae0) Stream removed, broadcasting: 1
I0725 10:35:46.107053       7 log.go:172] (0xc00220f130) (0xc002dfdae0) Stream removed, broadcasting: 1
I0725 10:35:46.107065       7 log.go:172] (0xc00220f130) (0xc0023f0000) Stream removed, broadcasting: 3
I0725 10:35:46.107215       7 log.go:172] (0xc00220f130) Go away received
I0725 10:35:46.107270       7 log.go:172] (0xc00220f130) (0xc00296b720) Stream removed, broadcasting: 5
Jul 25 10:35:46.107: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 25 10:35:46.107: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:46.107: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:46.135426       7 log.go:172] (0xc0021b6bb0) (0xc002ed1ae0) Create stream
I0725 10:35:46.135462       7 log.go:172] (0xc0021b6bb0) (0xc002ed1ae0) Stream added, broadcasting: 1
I0725 10:35:46.138619       7 log.go:172] (0xc0021b6bb0) Reply frame received for 1
I0725 10:35:46.138665       7 log.go:172] (0xc0021b6bb0) (0xc00296b7c0) Create stream
I0725 10:35:46.138679       7 log.go:172] (0xc0021b6bb0) (0xc00296b7c0) Stream added, broadcasting: 3
I0725 10:35:46.139610       7 log.go:172] (0xc0021b6bb0) Reply frame received for 3
I0725 10:35:46.139650       7 log.go:172] (0xc0021b6bb0) (0xc00296b860) Create stream
I0725 10:35:46.139664       7 log.go:172] (0xc0021b6bb0) (0xc00296b860) Stream added, broadcasting: 5
I0725 10:35:46.140569       7 log.go:172] (0xc0021b6bb0) Reply frame received for 5
I0725 10:35:46.216116       7 log.go:172] (0xc0021b6bb0) Data frame received for 5
I0725 10:35:46.216163       7 log.go:172] (0xc00296b860) (5) Data frame handling
I0725 10:35:46.216190       7 log.go:172] (0xc0021b6bb0) Data frame received for 3
I0725 10:35:46.216202       7 log.go:172] (0xc00296b7c0) (3) Data frame handling
I0725 10:35:46.216213       7 log.go:172] (0xc00296b7c0) (3) Data frame sent
I0725 10:35:46.216225       7 log.go:172] (0xc0021b6bb0) Data frame received for 3
I0725 10:35:46.216241       7 log.go:172] (0xc00296b7c0) (3) Data frame handling
I0725 10:35:46.218030       7 log.go:172] (0xc0021b6bb0) Data frame received for 1
I0725 10:35:46.218059       7 log.go:172] (0xc002ed1ae0) (1) Data frame handling
I0725 10:35:46.218080       7 log.go:172] (0xc002ed1ae0) (1) Data frame sent
I0725 10:35:46.218098       7 log.go:172] (0xc0021b6bb0) (0xc002ed1ae0) Stream removed, broadcasting: 1
I0725 10:35:46.218132       7 log.go:172] (0xc0021b6bb0) Go away received
I0725 10:35:46.218231       7 log.go:172] (0xc0021b6bb0) (0xc002ed1ae0) Stream removed, broadcasting: 1
I0725 10:35:46.218248       7 log.go:172] (0xc0021b6bb0) (0xc00296b7c0) Stream removed, broadcasting: 3
I0725 10:35:46.218257       7 log.go:172] (0xc0021b6bb0) (0xc00296b860) Stream removed, broadcasting: 5
Jul 25 10:35:46.218: INFO: Exec stderr: ""
Jul 25 10:35:46.218: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:46.218: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:46.242878       7 log.go:172] (0xc0030fc8f0) (0xc00296bb80) Create stream
I0725 10:35:46.242911       7 log.go:172] (0xc0030fc8f0) (0xc00296bb80) Stream added, broadcasting: 1
I0725 10:35:46.245083       7 log.go:172] (0xc0030fc8f0) Reply frame received for 1
I0725 10:35:46.245118       7 log.go:172] (0xc0030fc8f0) (0xc002dfdc20) Create stream
I0725 10:35:46.245131       7 log.go:172] (0xc0030fc8f0) (0xc002dfdc20) Stream added, broadcasting: 3
I0725 10:35:46.245988       7 log.go:172] (0xc0030fc8f0) Reply frame received for 3
I0725 10:35:46.246031       7 log.go:172] (0xc0030fc8f0) (0xc0023f00a0) Create stream
I0725 10:35:46.246042       7 log.go:172] (0xc0030fc8f0) (0xc0023f00a0) Stream added, broadcasting: 5
I0725 10:35:46.246715       7 log.go:172] (0xc0030fc8f0) Reply frame received for 5
I0725 10:35:46.321185       7 log.go:172] (0xc0030fc8f0) Data frame received for 3
I0725 10:35:46.321228       7 log.go:172] (0xc002dfdc20) (3) Data frame handling
I0725 10:35:46.321246       7 log.go:172] (0xc002dfdc20) (3) Data frame sent
I0725 10:35:46.321259       7 log.go:172] (0xc0030fc8f0) Data frame received for 3
I0725 10:35:46.321284       7 log.go:172] (0xc002dfdc20) (3) Data frame handling
I0725 10:35:46.321326       7 log.go:172] (0xc0030fc8f0) Data frame received for 5
I0725 10:35:46.321365       7 log.go:172] (0xc0023f00a0) (5) Data frame handling
I0725 10:35:46.322989       7 log.go:172] (0xc0030fc8f0) Data frame received for 1
I0725 10:35:46.323012       7 log.go:172] (0xc00296bb80) (1) Data frame handling
I0725 10:35:46.323022       7 log.go:172] (0xc00296bb80) (1) Data frame sent
I0725 10:35:46.323034       7 log.go:172] (0xc0030fc8f0) (0xc00296bb80) Stream removed, broadcasting: 1
I0725 10:35:46.323074       7 log.go:172] (0xc0030fc8f0) Go away received
I0725 10:35:46.323102       7 log.go:172] (0xc0030fc8f0) (0xc00296bb80) Stream removed, broadcasting: 1
I0725 10:35:46.323114       7 log.go:172] (0xc0030fc8f0) (0xc002dfdc20) Stream removed, broadcasting: 3
I0725 10:35:46.323123       7 log.go:172] (0xc0030fc8f0) (0xc0023f00a0) Stream removed, broadcasting: 5
Jul 25 10:35:46.323: INFO: Exec stderr: ""
Jul 25 10:35:46.323: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:46.323: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:46.353843       7 log.go:172] (0xc0021b71e0) (0xc002ed1d60) Create stream
I0725 10:35:46.353879       7 log.go:172] (0xc0021b71e0) (0xc002ed1d60) Stream added, broadcasting: 1
I0725 10:35:46.357941       7 log.go:172] (0xc0021b71e0) Reply frame received for 1
I0725 10:35:46.357982       7 log.go:172] (0xc0021b71e0) (0xc002dfdcc0) Create stream
I0725 10:35:46.357998       7 log.go:172] (0xc0021b71e0) (0xc002dfdcc0) Stream added, broadcasting: 3
I0725 10:35:46.359011       7 log.go:172] (0xc0021b71e0) Reply frame received for 3
I0725 10:35:46.359044       7 log.go:172] (0xc0021b71e0) (0xc00296bc20) Create stream
I0725 10:35:46.359053       7 log.go:172] (0xc0021b71e0) (0xc00296bc20) Stream added, broadcasting: 5
I0725 10:35:46.359910       7 log.go:172] (0xc0021b71e0) Reply frame received for 5
I0725 10:35:46.415900       7 log.go:172] (0xc0021b71e0) Data frame received for 3
I0725 10:35:46.415930       7 log.go:172] (0xc002dfdcc0) (3) Data frame handling
I0725 10:35:46.415939       7 log.go:172] (0xc002dfdcc0) (3) Data frame sent
I0725 10:35:46.415946       7 log.go:172] (0xc0021b71e0) Data frame received for 3
I0725 10:35:46.415953       7 log.go:172] (0xc002dfdcc0) (3) Data frame handling
I0725 10:35:46.415965       7 log.go:172] (0xc0021b71e0) Data frame received for 5
I0725 10:35:46.415976       7 log.go:172] (0xc00296bc20) (5) Data frame handling
I0725 10:35:46.417817       7 log.go:172] (0xc0021b71e0) Data frame received for 1
I0725 10:35:46.417848       7 log.go:172] (0xc002ed1d60) (1) Data frame handling
I0725 10:35:46.417864       7 log.go:172] (0xc002ed1d60) (1) Data frame sent
I0725 10:35:46.417879       7 log.go:172] (0xc0021b71e0) (0xc002ed1d60) Stream removed, broadcasting: 1
I0725 10:35:46.417898       7 log.go:172] (0xc0021b71e0) Go away received
I0725 10:35:46.418014       7 log.go:172] (0xc0021b71e0) (0xc002ed1d60) Stream removed, broadcasting: 1
I0725 10:35:46.418039       7 log.go:172] (0xc0021b71e0) (0xc002dfdcc0) Stream removed, broadcasting: 3
I0725 10:35:46.418053       7 log.go:172] (0xc0021b71e0) (0xc00296bc20) Stream removed, broadcasting: 5
Jul 25 10:35:46.418: INFO: Exec stderr: ""
Jul 25 10:35:46.418: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4834 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:35:46.418: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:35:46.450934       7 log.go:172] (0xc00220f810) (0xc002dfdea0) Create stream
I0725 10:35:46.450962       7 log.go:172] (0xc00220f810) (0xc002dfdea0) Stream added, broadcasting: 1
I0725 10:35:46.457267       7 log.go:172] (0xc00220f810) Reply frame received for 1
I0725 10:35:46.457331       7 log.go:172] (0xc00220f810) (0xc0018e0000) Create stream
I0725 10:35:46.457357       7 log.go:172] (0xc00220f810) (0xc0018e0000) Stream added, broadcasting: 3
I0725 10:35:46.459754       7 log.go:172] (0xc00220f810) Reply frame received for 3
I0725 10:35:46.459793       7 log.go:172] (0xc00220f810) (0xc002dfdf40) Create stream
I0725 10:35:46.459808       7 log.go:172] (0xc00220f810) (0xc002dfdf40) Stream added, broadcasting: 5
I0725 10:35:46.461976       7 log.go:172] (0xc00220f810) Reply frame received for 5
I0725 10:35:46.514840       7 log.go:172] (0xc00220f810) Data frame received for 5
I0725 10:35:46.514903       7 log.go:172] (0xc002dfdf40) (5) Data frame handling
I0725 10:35:46.514950       7 log.go:172] (0xc00220f810) Data frame received for 3
I0725 10:35:46.515095       7 log.go:172] (0xc0018e0000) (3) Data frame handling
I0725 10:35:46.515185       7 log.go:172] (0xc0018e0000) (3) Data frame sent
I0725 10:35:46.515218       7 log.go:172] (0xc00220f810) Data frame received for 3
I0725 10:35:46.515246       7 log.go:172] (0xc0018e0000) (3) Data frame handling
I0725 10:35:46.516328       7 log.go:172] (0xc00220f810) Data frame received for 1
I0725 10:35:46.516357       7 log.go:172] (0xc002dfdea0) (1) Data frame handling
I0725 10:35:46.516393       7 log.go:172] (0xc002dfdea0) (1) Data frame sent
I0725 10:35:46.516435       7 log.go:172] (0xc00220f810) (0xc002dfdea0) Stream removed, broadcasting: 1
I0725 10:35:46.516525       7 log.go:172] (0xc00220f810) (0xc002dfdea0) Stream removed, broadcasting: 1
I0725 10:35:46.516548       7 log.go:172] (0xc00220f810) (0xc0018e0000) Stream removed, broadcasting: 3
I0725 10:35:46.516622       7 log.go:172] (0xc00220f810) Go away received
I0725 10:35:46.516682       7 log.go:172] (0xc00220f810) (0xc002dfdf40) Stream removed, broadcasting: 5
Jul 25 10:35:46.516: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:35:46.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4834" for this suite.

• [SLOW TEST:13.234 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:35:46.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-17d35d17-29c9-444e-9529-baf763cb21c1 in namespace container-probe-4800
Jul 25 10:35:50.650: INFO: Started pod liveness-17d35d17-29c9-444e-9529-baf763cb21c1 in namespace container-probe-4800
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 10:35:50.654: INFO: Initial restart count of pod liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is 0
Jul 25 10:36:04.733: INFO: Restart count of pod container-probe-4800/liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is now 1 (14.079825221s elapsed)
Jul 25 10:36:26.846: INFO: Restart count of pod container-probe-4800/liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is now 2 (36.192596619s elapsed)
Jul 25 10:36:44.889: INFO: Restart count of pod container-probe-4800/liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is now 3 (54.235822862s elapsed)
Jul 25 10:37:04.942: INFO: Restart count of pod container-probe-4800/liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is now 4 (1m14.288714329s elapsed)
Jul 25 10:38:11.170: INFO: Restart count of pod container-probe-4800/liveness-17d35d17-29c9-444e-9529-baf763cb21c1 is now 5 (2m20.516559354s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4800" for this suite.

• [SLOW TEST:144.733 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:11.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 25 10:38:11.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served

STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:25.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7500" for this suite.

• [SLOW TEST:14.609 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":18,"skipped":329,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:25.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:26.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-476" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":19,"skipped":395,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:26.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-63d9af36-a6d9-4500-a39d-030867b6dd2a
STEP: Creating a pod to test consume secrets
Jul 25 10:38:26.388: INFO: Waiting up to 5m0s for pod "pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974" in namespace "secrets-1580" to be "Succeeded or Failed"
Jul 25 10:38:26.402: INFO: Pod "pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974": Phase="Pending", Reason="", readiness=false. Elapsed: 13.797807ms
Jul 25 10:38:28.562: INFO: Pod "pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173893904s
Jul 25 10:38:30.574: INFO: Pod "pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186110934s
STEP: Saw pod success
Jul 25 10:38:30.574: INFO: Pod "pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974" satisfied condition "Succeeded or Failed"
Jul 25 10:38:30.578: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974 container secret-volume-test: 
STEP: delete the pod
Jul 25 10:38:30.655: INFO: Waiting for pod pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974 to disappear
Jul 25 10:38:30.668: INFO: Pod pod-secrets-d91e831a-85ed-4ca6-a41b-51059efa6974 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:30.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1580" for this suite.
STEP: Destroying namespace "secret-namespace-6817" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":401,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:30.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:38:31.208: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 25 10:38:31.286: INFO: Number of nodes with available pods: 0
Jul 25 10:38:31.286: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 25 10:38:31.333: INFO: Number of nodes with available pods: 0
Jul 25 10:38:31.333: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:32.338: INFO: Number of nodes with available pods: 0
Jul 25 10:38:32.338: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:33.358: INFO: Number of nodes with available pods: 0
Jul 25 10:38:33.358: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:34.340: INFO: Number of nodes with available pods: 0
Jul 25 10:38:34.340: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:35.336: INFO: Number of nodes with available pods: 1
Jul 25 10:38:35.336: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 25 10:38:35.404: INFO: Number of nodes with available pods: 1
Jul 25 10:38:35.404: INFO: Number of running nodes: 0, number of available pods: 1
Jul 25 10:38:36.417: INFO: Number of nodes with available pods: 0
Jul 25 10:38:36.417: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 25 10:38:36.466: INFO: Number of nodes with available pods: 0
Jul 25 10:38:36.466: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:37.562: INFO: Number of nodes with available pods: 0
Jul 25 10:38:37.562: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:38.508: INFO: Number of nodes with available pods: 0
Jul 25 10:38:38.508: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:39.470: INFO: Number of nodes with available pods: 0
Jul 25 10:38:39.471: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:40.470: INFO: Number of nodes with available pods: 0
Jul 25 10:38:40.471: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:41.526: INFO: Number of nodes with available pods: 0
Jul 25 10:38:41.526: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:42.514: INFO: Number of nodes with available pods: 0
Jul 25 10:38:42.514: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:43.471: INFO: Number of nodes with available pods: 0
Jul 25 10:38:43.471: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:38:44.471: INFO: Number of nodes with available pods: 1
Jul 25 10:38:44.471: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1377, will wait for the garbage collector to delete the pods
Jul 25 10:38:44.536: INFO: Deleting DaemonSet.extensions daemon-set took: 6.021773ms
Jul 25 10:38:44.836: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.305133ms
Jul 25 10:38:53.340: INFO: Number of nodes with available pods: 0
Jul 25 10:38:53.340: INFO: Number of running nodes: 0, number of available pods: 0
Jul 25 10:38:53.347: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1377/daemonsets","resourceVersion":"4015954"},"items":null}

Jul 25 10:38:53.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1377/pods","resourceVersion":"4015954"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:53.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1377" for this suite.

• [SLOW TEST:22.723 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":21,"skipped":467,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:53.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 25 10:38:53.565: INFO: Waiting up to 5m0s for pod "pod-d7c332c4-df45-42ec-a054-762dbaac27da" in namespace "emptydir-9967" to be "Succeeded or Failed"
Jul 25 10:38:53.576: INFO: Pod "pod-d7c332c4-df45-42ec-a054-762dbaac27da": Phase="Pending", Reason="", readiness=false. Elapsed: 11.196377ms
Jul 25 10:38:55.580: INFO: Pod "pod-d7c332c4-df45-42ec-a054-762dbaac27da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015453646s
Jul 25 10:38:57.584: INFO: Pod "pod-d7c332c4-df45-42ec-a054-762dbaac27da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019161083s
STEP: Saw pod success
Jul 25 10:38:57.584: INFO: Pod "pod-d7c332c4-df45-42ec-a054-762dbaac27da" satisfied condition "Succeeded or Failed"
Jul 25 10:38:57.587: INFO: Trying to get logs from node kali-worker2 pod pod-d7c332c4-df45-42ec-a054-762dbaac27da container test-container: 
STEP: delete the pod
Jul 25 10:38:57.622: INFO: Waiting for pod pod-d7c332c4-df45-42ec-a054-762dbaac27da to disappear
Jul 25 10:38:57.627: INFO: Pod pod-d7c332c4-df45-42ec-a054-762dbaac27da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:38:57.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9967" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":467,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:38:57.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:39:01.830: INFO: Waiting up to 5m0s for pod "client-envvars-80cd6220-803a-4405-8d80-7046bab679f6" in namespace "pods-8364" to be "Succeeded or Failed"
Jul 25 10:39:01.860: INFO: Pod "client-envvars-80cd6220-803a-4405-8d80-7046bab679f6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.732468ms
Jul 25 10:39:03.865: INFO: Pod "client-envvars-80cd6220-803a-4405-8d80-7046bab679f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034271916s
Jul 25 10:39:05.869: INFO: Pod "client-envvars-80cd6220-803a-4405-8d80-7046bab679f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038853637s
STEP: Saw pod success
Jul 25 10:39:05.869: INFO: Pod "client-envvars-80cd6220-803a-4405-8d80-7046bab679f6" satisfied condition "Succeeded or Failed"
Jul 25 10:39:05.873: INFO: Trying to get logs from node kali-worker pod client-envvars-80cd6220-803a-4405-8d80-7046bab679f6 container env3cont: 
STEP: delete the pod
Jul 25 10:39:05.911: INFO: Waiting for pod client-envvars-80cd6220-803a-4405-8d80-7046bab679f6 to disappear
Jul 25 10:39:05.915: INFO: Pod client-envvars-80cd6220-803a-4405-8d80-7046bab679f6 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:39:05.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8364" for this suite.

• [SLOW TEST:8.286 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:39:05.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 10:39:06.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d" in namespace "downward-api-9096" to be "Succeeded or Failed"
Jul 25 10:39:06.035: INFO: Pod "downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.894399ms
Jul 25 10:39:08.039: INFO: Pod "downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009990011s
Jul 25 10:39:10.044: INFO: Pod "downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014404553s
STEP: Saw pod success
Jul 25 10:39:10.044: INFO: Pod "downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d" satisfied condition "Succeeded or Failed"
Jul 25 10:39:10.048: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d container client-container: 
STEP: delete the pod
Jul 25 10:39:10.078: INFO: Waiting for pod downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d to disappear
Jul 25 10:39:10.095: INFO: Pod downwardapi-volume-5058a9a6-9c19-4518-894d-733da5b06d1d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:39:10.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9096" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:39:10.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Jul 25 10:39:10.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-215'
Jul 25 10:39:10.525: INFO: stderr: ""
Jul 25 10:39:10.525: INFO: stdout: "pod/pause created\n"
Jul 25 10:39:10.525: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 25 10:39:10.525: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-215" to be "running and ready"
Jul 25 10:39:10.550: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.347051ms
Jul 25 10:39:12.574: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049393291s
Jul 25 10:39:14.578: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.053462178s
Jul 25 10:39:14.578: INFO: Pod "pause" satisfied condition "running and ready"
Jul 25 10:39:14.578: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 25 10:39:14.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-215'
Jul 25 10:39:14.691: INFO: stderr: ""
Jul 25 10:39:14.691: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 25 10:39:14.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-215'
Jul 25 10:39:14.841: INFO: stderr: ""
Jul 25 10:39:14.841: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 25 10:39:14.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-215'
Jul 25 10:39:14.967: INFO: stderr: ""
Jul 25 10:39:14.967: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 25 10:39:14.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-215'
Jul 25 10:39:15.060: INFO: stderr: ""
Jul 25 10:39:15.060: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Jul 25 10:39:15.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-215'
Jul 25 10:39:15.227: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:39:15.227: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 25 10:39:15.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-215'
Jul 25 10:39:15.328: INFO: stderr: "No resources found in kubectl-215 namespace.\n"
Jul 25 10:39:15.328: INFO: stdout: ""
Jul 25 10:39:15.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-215 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 25 10:39:15.575: INFO: stderr: ""
Jul 25 10:39:15.575: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:39:15.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-215" for this suite.

• [SLOW TEST:5.577 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":25,"skipped":611,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:39:15.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 25 10:39:25.829: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 25 10:39:25.833: INFO: Pod pod-with-prestop-http-hook still exists
Jul 25 10:39:27.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 25 10:39:27.838: INFO: Pod pod-with-prestop-http-hook still exists
Jul 25 10:39:29.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 25 10:39:29.838: INFO: Pod pod-with-prestop-http-hook still exists
Jul 25 10:39:31.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 25 10:39:31.838: INFO: Pod pod-with-prestop-http-hook still exists
Jul 25 10:39:33.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 25 10:39:33.837: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:39:33.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1216" for this suite.

• [SLOW TEST:18.169 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":616,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:39:33.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 25 10:39:38.459: INFO: Successfully updated pod "annotationupdatea3300b14-7c80-4224-be15-d0ca7f2ffefa"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:39:40.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1921" for this suite.

• [SLOW TEST:6.650 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":630,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:39:40.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3721.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3721.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3721.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 10:39:47.201: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.230: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.305: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.309: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.327: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.330: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.332: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.335: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:47.340: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:39:52.346: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.348: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.351: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.353: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.361: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.364: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.369: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:52.375: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:39:57.345: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.349: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.351: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.353: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.361: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.363: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.368: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:39:57.374: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:40:02.345: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.348: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.353: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.355: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.367: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.373: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:02.379: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:40:07.346: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.350: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.353: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.356: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.366: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.369: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.372: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.376: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:07.382: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:40:12.344: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.348: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.350: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.354: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.363: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.372: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local from pod dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70: the server could not find the requested resource (get pods dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70)
Jul 25 10:40:12.379: INFO: Lookups using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3721.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3721.svc.cluster.local jessie_udp@dns-test-service-2.dns-3721.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3721.svc.cluster.local]

Jul 25 10:40:17.380: INFO: DNS probes using dns-3721/dns-test-c2d5db7a-a4bb-4055-90ec-d728b7336a70 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:17.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3721" for this suite.

• [SLOW TEST:37.552 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":28,"skipped":632,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:18.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:40:18.120: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:19.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8083" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":29,"skipped":670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:19.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 25 10:40:19.447: INFO: Waiting up to 5m0s for pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f" in namespace "downward-api-9124" to be "Succeeded or Failed"
Jul 25 10:40:19.451: INFO: Pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.818611ms
Jul 25 10:40:21.485: INFO: Pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037905547s
Jul 25 10:40:23.488: INFO: Pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f": Phase="Running", Reason="", readiness=true. Elapsed: 4.040610015s
Jul 25 10:40:25.493: INFO: Pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04537218s
STEP: Saw pod success
Jul 25 10:40:25.493: INFO: Pod "downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f" satisfied condition "Succeeded or Failed"
Jul 25 10:40:25.496: INFO: Trying to get logs from node kali-worker2 pod downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f container dapi-container: 
STEP: delete the pod
Jul 25 10:40:25.535: INFO: Waiting for pod downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f to disappear
Jul 25 10:40:25.553: INFO: Pod downward-api-ea063c7e-4cad-4379-977f-7fb796755d5f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:25.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9124" for this suite.

• [SLOW TEST:6.225 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:25.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul 25 10:40:31.731: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3409 PodName:pod-sharedvolume-9cd85e7c-536f-4c44-aca7-d6bd018afa44 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:40:31.731: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:40:31.767969       7 log.go:172] (0xc0024b2fd0) (0xc001698460) Create stream
I0725 10:40:31.768007       7 log.go:172] (0xc0024b2fd0) (0xc001698460) Stream added, broadcasting: 1
I0725 10:40:31.769829       7 log.go:172] (0xc0024b2fd0) Reply frame received for 1
I0725 10:40:31.769865       7 log.go:172] (0xc0024b2fd0) (0xc0016985a0) Create stream
I0725 10:40:31.769879       7 log.go:172] (0xc0024b2fd0) (0xc0016985a0) Stream added, broadcasting: 3
I0725 10:40:31.770626       7 log.go:172] (0xc0024b2fd0) Reply frame received for 3
I0725 10:40:31.770649       7 log.go:172] (0xc0024b2fd0) (0xc00296a640) Create stream
I0725 10:40:31.770657       7 log.go:172] (0xc0024b2fd0) (0xc00296a640) Stream added, broadcasting: 5
I0725 10:40:31.771312       7 log.go:172] (0xc0024b2fd0) Reply frame received for 5
I0725 10:40:31.863077       7 log.go:172] (0xc0024b2fd0) Data frame received for 5
I0725 10:40:31.863118       7 log.go:172] (0xc00296a640) (5) Data frame handling
I0725 10:40:31.863169       7 log.go:172] (0xc0024b2fd0) Data frame received for 3
I0725 10:40:31.863224       7 log.go:172] (0xc0016985a0) (3) Data frame handling
I0725 10:40:31.863255       7 log.go:172] (0xc0016985a0) (3) Data frame sent
I0725 10:40:31.863285       7 log.go:172] (0xc0024b2fd0) Data frame received for 3
I0725 10:40:31.863320       7 log.go:172] (0xc0016985a0) (3) Data frame handling
I0725 10:40:31.865260       7 log.go:172] (0xc0024b2fd0) Data frame received for 1
I0725 10:40:31.865344       7 log.go:172] (0xc001698460) (1) Data frame handling
I0725 10:40:31.865390       7 log.go:172] (0xc001698460) (1) Data frame sent
I0725 10:40:31.865422       7 log.go:172] (0xc0024b2fd0) (0xc001698460) Stream removed, broadcasting: 1
I0725 10:40:31.865442       7 log.go:172] (0xc0024b2fd0) Go away received
I0725 10:40:31.865602       7 log.go:172] (0xc0024b2fd0) (0xc001698460) Stream removed, broadcasting: 1
I0725 10:40:31.865632       7 log.go:172] (0xc0024b2fd0) (0xc0016985a0) Stream removed, broadcasting: 3
I0725 10:40:31.865656       7 log.go:172] (0xc0024b2fd0) (0xc00296a640) Stream removed, broadcasting: 5
Jul 25 10:40:31.865: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:31.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3409" for this suite.

• [SLOW TEST:6.312 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":31,"skipped":774,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:31.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-201797aa-4cbe-4c36-b255-5294703b379d
STEP: Creating a pod to test consume configMaps
Jul 25 10:40:31.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081" in namespace "configmap-4287" to be "Succeeded or Failed"
Jul 25 10:40:31.989: INFO: Pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081": Phase="Pending", Reason="", readiness=false. Elapsed: 12.08557ms
Jul 25 10:40:33.993: INFO: Pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015904145s
Jul 25 10:40:35.997: INFO: Pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081": Phase="Running", Reason="", readiness=true. Elapsed: 4.020236969s
Jul 25 10:40:38.001: INFO: Pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024762556s
STEP: Saw pod success
Jul 25 10:40:38.002: INFO: Pod "pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081" satisfied condition "Succeeded or Failed"
Jul 25 10:40:38.005: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081 container configmap-volume-test: 
STEP: delete the pod
Jul 25 10:40:38.073: INFO: Waiting for pod pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081 to disappear
Jul 25 10:40:38.080: INFO: Pod pod-configmaps-e0e1f00b-d8f8-4f2c-8596-d620a39a5081 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:38.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4287" for this suite.

• [SLOW TEST:6.212 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":788,"failed":0}
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:38.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 25 10:40:38.226: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016706 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:40:38.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016707 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:40:38.226: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016708 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 25 10:40:48.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016773 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:40:48.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016774 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:40:48.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5841 /api/v1/namespaces/watch-5841/configmaps/e2e-watch-test-label-changed f0afa54a-50a3-4059-b736-9674f8f47bdb 4016775 0 2020-07-25 10:40:38 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-25 10:40:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:48.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5841" for this suite.

• [SLOW TEST:10.190 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":33,"skipped":788,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:48.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:40:48.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1086'
Jul 25 10:40:48.645: INFO: stderr: ""
Jul 25 10:40:48.645: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul 25 10:40:48.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1086'
Jul 25 10:40:48.977: INFO: stderr: ""
Jul 25 10:40:48.977: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 25 10:40:49.981: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 10:40:49.981: INFO: Found 0 / 1
Jul 25 10:40:51.072: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 10:40:51.072: INFO: Found 0 / 1
Jul 25 10:40:51.981: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 10:40:51.981: INFO: Found 0 / 1
Jul 25 10:40:52.981: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 10:40:52.981: INFO: Found 1 / 1
Jul 25 10:40:52.981: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 25 10:40:52.997: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 10:40:52.997: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 25 10:40:52.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-znnm4 --namespace=kubectl-1086'
Jul 25 10:40:53.118: INFO: stderr: ""
Jul 25 10:40:53.118: INFO: stdout: "Name:         agnhost-master-znnm4\nNamespace:    kubectl-1086\nPriority:     0\nNode:         kali-worker/172.18.0.13\nStart Time:   Sat, 25 Jul 2020 10:40:48 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.142\nIPs:\n  IP:           10.244.2.142\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://0f8dccff2914ac4f8f316c4f662db2c14cdef0e702e2b7e5508e72748c35bb02\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 25 Jul 2020 10:40:51 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z5qbn (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-z5qbn:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-z5qbn\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  5s    default-scheduler     Successfully assigned kubectl-1086/agnhost-master-znnm4 to kali-worker\n  Normal  Pulled     3s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, kali-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, kali-worker  Started container agnhost-master\n"
Jul 25 10:40:53.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1086'
Jul 25 10:40:53.232: INFO: stderr: ""
Jul 25 10:40:53.232: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1086\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-znnm4\n"
Jul 25 10:40:53.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1086'
Jul 25 10:40:53.337: INFO: stderr: ""
Jul 25 10:40:53.337: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1086\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.191.74\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.142:6379\nSession Affinity:  None\nEvents:            \n"
Jul 25 10:40:53.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Jul 25 10:40:53.602: INFO: stderr: ""
Jul 25 10:40:53.602: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:27:46 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Sat, 25 Jul 2020 10:40:50 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 25 Jul 2020 10:35:56 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 25 Jul 2020 10:35:56 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 25 Jul 2020 10:35:56 +0000   Fri, 10 Jul 2020 10:27:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 25 Jul 2020 10:35:56 +0000   Fri, 10 Jul 2020 10:28:23 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.16\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 d83d42c4b42d4de1b3233683d9cadf95\n  System UUID:                e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-beta.1-34-g49b0743c\n  Kubelet Version:            v1.18.4\n  Kube-Proxy Version:         v1.18.4\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-qtcqs                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                 coredns-66bff467f8-tjkg9                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                 etcd-kali-control-plane            
           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kindnet-zxw2f                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      15d\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-proxy-xmqbs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         15d\n  local-path-storage          local-path-provisioner-67795f75bd-clsb6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Jul 25 10:40:53.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe namespace kubectl-1086'
Jul 25 10:40:53.708: INFO: stderr: ""
Jul 25 10:40:53.708: INFO: stdout: "Name:         kubectl-1086\nLabels:       e2e-framework=kubectl\n              e2e-run=1d0a527c-4d86-45e0-a0d9-150e97f4c9a7\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:40:53.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1086" for this suite.

• [SLOW TEST:5.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":34,"skipped":792,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:40:53.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-61a4cb52-d48e-4c1b-a417-66728eb943ce
STEP: Creating a pod to test consume secrets
Jul 25 10:40:53.978: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e" in namespace "projected-2007" to be "Succeeded or Failed"
Jul 25 10:40:54.011: INFO: Pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.863469ms
Jul 25 10:40:56.118: INFO: Pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139171941s
Jul 25 10:40:58.275: INFO: Pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296990243s
Jul 25 10:41:00.451: INFO: Pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.4728876s
STEP: Saw pod success
Jul 25 10:41:00.451: INFO: Pod "pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e" satisfied condition "Succeeded or Failed"
Jul 25 10:41:00.455: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e container projected-secret-volume-test: <nil>

STEP: delete the pod
Jul 25 10:41:00.943: INFO: Waiting for pod pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e to disappear
Jul 25 10:41:00.954: INFO: Pod pod-projected-secrets-20013267-e227-40a1-986e-e5decfeb1d8e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:00.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2007" for this suite.

• [SLOW TEST:7.243 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":820,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:00.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul 25 10:41:01.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8403'
Jul 25 10:41:01.770: INFO: stderr: ""
Jul 25 10:41:01.770: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 25 10:41:01.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:02.020: INFO: stderr: ""
Jul 25 10:41:02.020: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jul 25 10:41:07.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:07.124: INFO: stderr: ""
Jul 25 10:41:07.124: INFO: stdout: "update-demo-nautilus-7fvpp update-demo-nautilus-nxq6w "
Jul 25 10:41:07.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:07.216: INFO: stderr: ""
Jul 25 10:41:07.216: INFO: stdout: "true"
Jul 25 10:41:07.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:07.308: INFO: stderr: ""
Jul 25 10:41:07.308: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 10:41:07.308: INFO: validating pod update-demo-nautilus-7fvpp
Jul 25 10:41:07.313: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 10:41:07.313: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 10:41:07.313: INFO: update-demo-nautilus-7fvpp is verified up and running
Jul 25 10:41:07.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxq6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:07.416: INFO: stderr: ""
Jul 25 10:41:07.416: INFO: stdout: "true"
Jul 25 10:41:07.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxq6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:07.510: INFO: stderr: ""
Jul 25 10:41:07.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 10:41:07.510: INFO: validating pod update-demo-nautilus-nxq6w
Jul 25 10:41:07.515: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 10:41:07.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 10:41:07.515: INFO: update-demo-nautilus-nxq6w is verified up and running
STEP: scaling down the replication controller
Jul 25 10:41:07.517: INFO: scanned /root for discovery docs: 
Jul 25 10:41:07.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8403'
Jul 25 10:41:08.663: INFO: stderr: ""
Jul 25 10:41:08.663: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 25 10:41:08.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:08.762: INFO: stderr: ""
Jul 25 10:41:08.762: INFO: stdout: "update-demo-nautilus-7fvpp update-demo-nautilus-nxq6w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 25 10:41:13.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:13.866: INFO: stderr: ""
Jul 25 10:41:13.866: INFO: stdout: "update-demo-nautilus-7fvpp "
Jul 25 10:41:13.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:13.964: INFO: stderr: ""
Jul 25 10:41:13.964: INFO: stdout: "true"
Jul 25 10:41:13.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:14.063: INFO: stderr: ""
Jul 25 10:41:14.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 10:41:14.063: INFO: validating pod update-demo-nautilus-7fvpp
Jul 25 10:41:14.066: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 10:41:14.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 10:41:14.066: INFO: update-demo-nautilus-7fvpp is verified up and running
STEP: scaling up the replication controller
Jul 25 10:41:14.068: INFO: scanned /root for discovery docs: 
Jul 25 10:41:14.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8403'
Jul 25 10:41:15.192: INFO: stderr: ""
Jul 25 10:41:15.192: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 25 10:41:15.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:15.301: INFO: stderr: ""
Jul 25 10:41:15.301: INFO: stdout: "update-demo-nautilus-5947m update-demo-nautilus-7fvpp "
Jul 25 10:41:15.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5947m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:15.393: INFO: stderr: ""
Jul 25 10:41:15.393: INFO: stdout: ""
Jul 25 10:41:15.393: INFO: update-demo-nautilus-5947m is created but not running
Jul 25 10:41:20.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8403'
Jul 25 10:41:20.496: INFO: stderr: ""
Jul 25 10:41:20.497: INFO: stdout: "update-demo-nautilus-5947m update-demo-nautilus-7fvpp "
Jul 25 10:41:20.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5947m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:20.599: INFO: stderr: ""
Jul 25 10:41:20.599: INFO: stdout: "true"
Jul 25 10:41:20.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5947m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:20.687: INFO: stderr: ""
Jul 25 10:41:20.687: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 10:41:20.687: INFO: validating pod update-demo-nautilus-5947m
Jul 25 10:41:20.692: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 10:41:20.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 10:41:20.692: INFO: update-demo-nautilus-5947m is verified up and running
Jul 25 10:41:20.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:20.787: INFO: stderr: ""
Jul 25 10:41:20.787: INFO: stdout: "true"
Jul 25 10:41:20.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fvpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8403'
Jul 25 10:41:20.889: INFO: stderr: ""
Jul 25 10:41:20.889: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 10:41:20.889: INFO: validating pod update-demo-nautilus-7fvpp
Jul 25 10:41:20.893: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 10:41:20.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 10:41:20.893: INFO: update-demo-nautilus-7fvpp is verified up and running
STEP: using delete to clean up resources
Jul 25 10:41:20.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8403'
Jul 25 10:41:20.996: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:41:20.996: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 25 10:41:20.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8403'
Jul 25 10:41:21.103: INFO: stderr: "No resources found in kubectl-8403 namespace.\n"
Jul 25 10:41:21.103: INFO: stdout: ""
Jul 25 10:41:21.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8403 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 25 10:41:21.197: INFO: stderr: ""
Jul 25 10:41:21.197: INFO: stdout: "update-demo-nautilus-5947m\nupdate-demo-nautilus-7fvpp\n"
Jul 25 10:41:21.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8403'
Jul 25 10:41:21.788: INFO: stderr: "No resources found in kubectl-8403 namespace.\n"
Jul 25 10:41:21.788: INFO: stdout: ""
Jul 25 10:41:21.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8403 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 25 10:41:21.885: INFO: stderr: ""
Jul 25 10:41:21.885: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:21.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8403" for this suite.

• [SLOW TEST:20.948 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":36,"skipped":834,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:21.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-dc1b608a-46be-4bfc-a53f-3ee3aa350384
STEP: Creating a pod to test consume secrets
Jul 25 10:41:22.333: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0" in namespace "projected-2262" to be "Succeeded or Failed"
Jul 25 10:41:22.461: INFO: Pod "pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0": Phase="Pending", Reason="", readiness=false. Elapsed: 128.213271ms
Jul 25 10:41:24.466: INFO: Pod "pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132657262s
Jul 25 10:41:26.468: INFO: Pod "pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135448664s
STEP: Saw pod success
Jul 25 10:41:26.468: INFO: Pod "pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0" satisfied condition "Succeeded or Failed"
Jul 25 10:41:26.471: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 25 10:41:26.567: INFO: Waiting for pod pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0 to disappear
Jul 25 10:41:26.628: INFO: Pod pod-projected-secrets-a5631de6-33df-42c6-9540-a293704965d0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:26.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2262" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":841,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:26.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-220f2fba-9a0b-46b7-b076-5c9f4462284e
STEP: Creating a pod to test consume configMaps
Jul 25 10:41:26.761: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7" in namespace "projected-7339" to be "Succeeded or Failed"
Jul 25 10:41:26.765: INFO: Pod "pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987888ms
Jul 25 10:41:28.769: INFO: Pod "pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008070761s
Jul 25 10:41:30.821: INFO: Pod "pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059657548s
STEP: Saw pod success
Jul 25 10:41:30.821: INFO: Pod "pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7" satisfied condition "Succeeded or Failed"
Jul 25 10:41:30.823: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 25 10:41:30.855: INFO: Waiting for pod pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7 to disappear
Jul 25 10:41:30.865: INFO: Pod pod-projected-configmaps-1b2e435a-5cbb-4343-97f0-8b53cee9f5e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:30.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7339" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":843,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:30.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul 25 10:41:30.984: INFO: Pod name pod-release: Found 0 pods out of 1
Jul 25 10:41:36.030: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:36.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8253" for this suite.

• [SLOW TEST:5.346 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":39,"skipped":871,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:36.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 25 10:41:41.621: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:41.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9135" for this suite.

• [SLOW TEST:5.957 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":926,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:42.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 25 10:41:47.114: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1083 pod-service-account-0ea1b36b-634d-4da7-84ad-463836b6702e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 25 10:41:47.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1083 pod-service-account-0ea1b36b-634d-4da7-84ad-463836b6702e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 25 10:41:47.608: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1083 pod-service-account-0ea1b36b-634d-4da7-84ad-463836b6702e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:47.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1083" for this suite.

• [SLOW TEST:5.654 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":41,"skipped":983,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:47.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-717af4ce-cd0c-4df7-96fe-72c7d1a8fea5
STEP: Creating a pod to test consume configMaps
Jul 25 10:41:47.936: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30" in namespace "projected-3938" to be "Succeeded or Failed"
Jul 25 10:41:48.013: INFO: Pod "pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30": Phase="Pending", Reason="", readiness=false. Elapsed: 76.4946ms
Jul 25 10:41:50.030: INFO: Pod "pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094384624s
Jul 25 10:41:52.035: INFO: Pod "pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099058882s
STEP: Saw pod success
Jul 25 10:41:52.035: INFO: Pod "pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30" satisfied condition "Succeeded or Failed"
Jul 25 10:41:52.039: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 25 10:41:52.076: INFO: Waiting for pod pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30 to disappear
Jul 25 10:41:52.083: INFO: Pod pod-projected-configmaps-1a6bee2b-b708-4813-bf34-a40e9ba1bd30 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:41:52.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3938" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":1020,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:41:52.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:41:52.209: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 25 10:41:57.216: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 25 10:41:57.216: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 25 10:41:59.221: INFO: Creating deployment "test-rollover-deployment"
Jul 25 10:41:59.250: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 25 10:42:01.306: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 25 10:42:01.312: INFO: Ensure that both replica sets have 1 created replica
Jul 25 10:42:01.318: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 25 10:42:01.329: INFO: Updating deployment test-rollover-deployment
Jul 25 10:42:01.329: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 25 10:42:03.342: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 25 10:42:03.348: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 25 10:42:03.354: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:03.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270521, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:05.361: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:05.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270525, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:07.362: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:07.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270525, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:09.412: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:09.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270525, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:11.792: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:11.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270525, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:13.361: INFO: all replica sets need to contain the pod-template-hash label
Jul 25 10:42:13.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270525, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:15.491: INFO: 
Jul 25 10:42:15.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270535, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270519, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:42:17.360: INFO: 
Jul 25 10:42:17.360: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 25 10:42:17.368: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-924 /apis/apps/v1/namespaces/deployment-924/deployments/test-rollover-deployment a2ab7180-8992-4b38-afc1-0ab4289c14e1 4017583 2 2020-07-25 10:41:59 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-25 10:42:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 10:42:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 
108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ee86e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-25 10:41:59 +0000 UTC,LastTransitionTime:2020-07-25 10:41:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-07-25 10:42:15 +0000 UTC,LastTransitionTime:2020-07-25 10:41:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 25 10:42:17.372: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-924 /apis/apps/v1/namespaces/deployment-924/replicasets/test-rollover-deployment-84f7f6f64b 593e92a4-0e10-49e8-a7d3-644e502584fb 4017570 2 2020-07-25 10:42:01 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a2ab7180-8992-4b38-afc1-0ab4289c14e1 0xc002ee9027 0xc002ee9028}] []  [{kube-controller-manager Update apps/v1 2020-07-25 10:42:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 50 97 98 55 49 56 48 45 56 57 57 50 45 52 98 51 56 45 97 102 99 49 45 48 97 98 52 50 56 57 99 49 52 101 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ee90b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:42:17.372: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 25 10:42:17.372: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-924 /apis/apps/v1/namespaces/deployment-924/replicasets/test-rollover-controller e7411b1e-9e7b-4563-a0db-4bf136d3f292 4017581 2 2020-07-25 10:41:52 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a2ab7180-8992-4b38-afc1-0ab4289c14e1 0xc002ee8aef 0xc002ee8b00}] []  [{e2e.test Update apps/v1 2020-07-25 10:41:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 10:42:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 50 97 98 55 49 56 48 45 56 57 57 50 45 52 98 51 56 45 97 102 99 49 45 48 97 98 52 50 56 57 99 49 52 101 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 
58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ee8b98  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:42:17.373: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-924 /apis/apps/v1/namespaces/deployment-924/replicasets/test-rollover-deployment-5686c4cfd5 60e5c9c1-f9a3-404a-9f1a-60e827b31864 4017458 2 2020-07-25 10:41:59 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a2ab7180-8992-4b38-afc1-0ab4289c14e1 0xc002ee8c07 0xc002ee8c08}] []  [{kube-controller-manager Update apps/v1 2020-07-25 10:42:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 50 97 98 55 49 56 48 45 56 57 57 50 45 52 98 51 56 45 97 102 99 49 45 48 97 98 52 50 56 57 99 49 52 101 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 
97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ee8c98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:42:17.376: INFO: Pod "test-rollover-deployment-84f7f6f64b-ts6bx" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-ts6bx test-rollover-deployment-84f7f6f64b- deployment-924 /api/v1/namespaces/deployment-924/pods/test-rollover-deployment-84f7f6f64b-ts6bx 11337ffb-6ae9-4f44-95b3-2092fd2e6e82 4017488 0 2020-07-25 10:42:01 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 593e92a4-0e10-49e8-a7d3-644e502584fb 0xc002bc9867 0xc002bc9868}] []  [{kube-controller-manager Update v1 2020-07-25 10:42:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 51 101 57 50 97 52 45 48 101 49 48 45 52 57 101 56 45 97 55 100 51 45 54 52 52 101 53 48 50 53 56 52 102 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:42:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 
34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jj5xb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jj5xb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jj5xb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:42:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:42:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:42:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:42:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.12,StartTime:2020-07-25 10:42:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:42:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2a1cbdbf8210f7b490943863bed2d611a3f0f177c8bb62ce1b03b4d821367ee1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:42:17.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-924" for this suite.

• [SLOW TEST:25.313 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":43,"skipped":1045,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:42:17.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-6f1b7dc1-4d3c-457d-ba70-a157b63d4171
STEP: Creating a pod to test consume configMaps
Jul 25 10:42:17.534: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c" in namespace "projected-8594" to be "Succeeded or Failed"
Jul 25 10:42:17.538: INFO: Pod "pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.750764ms
Jul 25 10:42:19.541: INFO: Pod "pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007449425s
Jul 25 10:42:21.545: INFO: Pod "pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011092149s
STEP: Saw pod success
Jul 25 10:42:21.545: INFO: Pod "pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c" satisfied condition "Succeeded or Failed"
Jul 25 10:42:21.548: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c container projected-configmap-volume-test: 
STEP: delete the pod
Jul 25 10:42:21.578: INFO: Waiting for pod pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c to disappear
Jul 25 10:42:21.610: INFO: Pod pod-projected-configmaps-60b30195-00b9-4156-b880-ff70d916429c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:42:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8594" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":1048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:42:21.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 25 10:42:21.688: INFO: Waiting up to 5m0s for pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e" in namespace "downward-api-8621" to be "Succeeded or Failed"
Jul 25 10:42:21.702: INFO: Pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.075689ms
Jul 25 10:42:23.840: INFO: Pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151209839s
Jul 25 10:42:25.844: INFO: Pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155443874s
Jul 25 10:42:27.849: INFO: Pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160291871s
STEP: Saw pod success
Jul 25 10:42:27.849: INFO: Pod "downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e" satisfied condition "Succeeded or Failed"
Jul 25 10:42:27.852: INFO: Trying to get logs from node kali-worker2 pod downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e container dapi-container: 
STEP: delete the pod
Jul 25 10:42:27.916: INFO: Waiting for pod downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e to disappear
Jul 25 10:42:27.927: INFO: Pod downward-api-667a714a-8a47-4a6e-9888-18104fd1d59e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:42:27.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8621" for this suite.

• [SLOW TEST:6.316 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":1071,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:42:27.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9678
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 25 10:42:27.978: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 25 10:42:28.122: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 10:42:30.127: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 10:42:32.125: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:34.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:36.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:38.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:40.127: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:42.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:44.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 10:42:46.127: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 25 10:42:46.133: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul 25 10:42:48.137: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul 25 10:42:50.137: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 25 10:42:56.245: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.153 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:42:56.245: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:42:56.277831       7 log.go:172] (0xc0024b3760) (0xc002a43d60) Create stream
I0725 10:42:56.277860       7 log.go:172] (0xc0024b3760) (0xc002a43d60) Stream added, broadcasting: 1
I0725 10:42:56.279652       7 log.go:172] (0xc0024b3760) Reply frame received for 1
I0725 10:42:56.279701       7 log.go:172] (0xc0024b3760) (0xc002a43e00) Create stream
I0725 10:42:56.279724       7 log.go:172] (0xc0024b3760) (0xc002a43e00) Stream added, broadcasting: 3
I0725 10:42:56.280829       7 log.go:172] (0xc0024b3760) Reply frame received for 3
I0725 10:42:56.280879       7 log.go:172] (0xc0024b3760) (0xc00283b040) Create stream
I0725 10:42:56.280895       7 log.go:172] (0xc0024b3760) (0xc00283b040) Stream added, broadcasting: 5
I0725 10:42:56.281948       7 log.go:172] (0xc0024b3760) Reply frame received for 5
I0725 10:42:57.358225       7 log.go:172] (0xc0024b3760) Data frame received for 3
I0725 10:42:57.358309       7 log.go:172] (0xc002a43e00) (3) Data frame handling
I0725 10:42:57.358370       7 log.go:172] (0xc002a43e00) (3) Data frame sent
I0725 10:42:57.358416       7 log.go:172] (0xc0024b3760) Data frame received for 3
I0725 10:42:57.358457       7 log.go:172] (0xc002a43e00) (3) Data frame handling
I0725 10:42:57.358510       7 log.go:172] (0xc0024b3760) Data frame received for 5
I0725 10:42:57.358553       7 log.go:172] (0xc00283b040) (5) Data frame handling
I0725 10:42:57.367926       7 log.go:172] (0xc0024b3760) Data frame received for 1
I0725 10:42:57.367965       7 log.go:172] (0xc002a43d60) (1) Data frame handling
I0725 10:42:57.368011       7 log.go:172] (0xc002a43d60) (1) Data frame sent
I0725 10:42:57.368035       7 log.go:172] (0xc0024b3760) (0xc002a43d60) Stream removed, broadcasting: 1
I0725 10:42:57.368056       7 log.go:172] (0xc0024b3760) Go away received
I0725 10:42:57.368158       7 log.go:172] (0xc0024b3760) (0xc002a43d60) Stream removed, broadcasting: 1
I0725 10:42:57.368178       7 log.go:172] (0xc0024b3760) (0xc002a43e00) Stream removed, broadcasting: 3
I0725 10:42:57.368193       7 log.go:172] (0xc0024b3760) (0xc00283b040) Stream removed, broadcasting: 5
Jul 25 10:42:57.368: INFO: Found all expected endpoints: [netserver-0]
Jul 25 10:42:57.396: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.15 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:42:57.396: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:42:57.429710       7 log.go:172] (0xc0027200b0) (0xc002af2280) Create stream
I0725 10:42:57.429752       7 log.go:172] (0xc0027200b0) (0xc002af2280) Stream added, broadcasting: 1
I0725 10:42:57.431563       7 log.go:172] (0xc0027200b0) Reply frame received for 1
I0725 10:42:57.431602       7 log.go:172] (0xc0027200b0) (0xc002af2320) Create stream
I0725 10:42:57.431609       7 log.go:172] (0xc0027200b0) (0xc002af2320) Stream added, broadcasting: 3
I0725 10:42:57.432512       7 log.go:172] (0xc0027200b0) Reply frame received for 3
I0725 10:42:57.432545       7 log.go:172] (0xc0027200b0) (0xc002a43ea0) Create stream
I0725 10:42:57.432556       7 log.go:172] (0xc0027200b0) (0xc002a43ea0) Stream added, broadcasting: 5
I0725 10:42:57.433636       7 log.go:172] (0xc0027200b0) Reply frame received for 5
I0725 10:42:58.508155       7 log.go:172] (0xc0027200b0) Data frame received for 5
I0725 10:42:58.508197       7 log.go:172] (0xc002a43ea0) (5) Data frame handling
I0725 10:42:58.508222       7 log.go:172] (0xc0027200b0) Data frame received for 3
I0725 10:42:58.508238       7 log.go:172] (0xc002af2320) (3) Data frame handling
I0725 10:42:58.508253       7 log.go:172] (0xc002af2320) (3) Data frame sent
I0725 10:42:58.508263       7 log.go:172] (0xc0027200b0) Data frame received for 3
I0725 10:42:58.508276       7 log.go:172] (0xc002af2320) (3) Data frame handling
I0725 10:42:58.510292       7 log.go:172] (0xc0027200b0) Data frame received for 1
I0725 10:42:58.510337       7 log.go:172] (0xc002af2280) (1) Data frame handling
I0725 10:42:58.510394       7 log.go:172] (0xc002af2280) (1) Data frame sent
I0725 10:42:58.510436       7 log.go:172] (0xc0027200b0) (0xc002af2280) Stream removed, broadcasting: 1
I0725 10:42:58.510502       7 log.go:172] (0xc0027200b0) Go away received
I0725 10:42:58.510587       7 log.go:172] (0xc0027200b0) (0xc002af2280) Stream removed, broadcasting: 1
I0725 10:42:58.510654       7 log.go:172] (0xc0027200b0) (0xc002af2320) Stream removed, broadcasting: 3
I0725 10:42:58.510676       7 log.go:172] (0xc0027200b0) (0xc002a43ea0) Stream removed, broadcasting: 5
Jul 25 10:42:58.510: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:42:58.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9678" for this suite.

• [SLOW TEST:30.587 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":1071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:42:58.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:42:59.519: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:43:01.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270579, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270579, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270579, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270579, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:43:04.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:43:04.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6979-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:06.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2906" for this suite.
STEP: Destroying namespace "webhook-2906-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.814 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":47,"skipped":1096,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:06.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 25 10:43:10.975: INFO: Successfully updated pod "pod-update-bb199324-50a6-4b76-a924-c9e27669c8cc"
STEP: verifying the updated pod is in kubernetes
Jul 25 10:43:11.013: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:11.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1507" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":1120,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:11.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jul 25 10:43:11.091: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Jul 25 10:43:11.911: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jul 25 10:43:14.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:43:16.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270592, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270591, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:43:18.999: INFO: Waited 521.740586ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:19.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1913" for this suite.

• [SLOW TEST:8.826 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":49,"skipped":1128,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:19.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-00f16964-2c61-41da-8952-bca834ad1378
STEP: Creating a pod to test consume configMaps
Jul 25 10:43:20.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d" in namespace "configmap-2601" to be "Succeeded or Failed"
Jul 25 10:43:20.310: INFO: Pod "pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.975807ms
Jul 25 10:43:22.314: INFO: Pod "pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032242161s
Jul 25 10:43:24.319: INFO: Pod "pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036588226s
STEP: Saw pod success
Jul 25 10:43:24.319: INFO: Pod "pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d" satisfied condition "Succeeded or Failed"
Jul 25 10:43:24.322: INFO: Trying to get logs from node kali-worker pod pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d container configmap-volume-test: 
STEP: delete the pod
Jul 25 10:43:24.354: INFO: Waiting for pod pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d to disappear
Jul 25 10:43:24.356: INFO: Pod pod-configmaps-bdc07b3b-7051-40a6-a4f1-278a114d372d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:24.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2601" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":1139,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:24.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Jul 25 10:43:24.414: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:24.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8174" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":51,"skipped":1147,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:24.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-be23db83-8ab3-4105-9738-7d2279301811
STEP: Creating a pod to test consume configMaps
Jul 25 10:43:24.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec" in namespace "projected-4921" to be "Succeeded or Failed"
Jul 25 10:43:24.833: INFO: Pod "pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec": Phase="Pending", Reason="", readiness=false. Elapsed: 225.158836ms
Jul 25 10:43:26.837: INFO: Pod "pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22956577s
Jul 25 10:43:28.907: INFO: Pod "pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.298882351s
STEP: Saw pod success
Jul 25 10:43:28.907: INFO: Pod "pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec" satisfied condition "Succeeded or Failed"
Jul 25 10:43:28.910: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec container projected-configmap-volume-test: 
STEP: delete the pod
Jul 25 10:43:29.063: INFO: Waiting for pod pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec to disappear
Jul 25 10:43:29.073: INFO: Pod pod-projected-configmaps-7e365d0b-0944-4998-b63f-c48cd3e9dcec no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:29.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4921" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":1154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:29.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6204.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6204.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 10:43:35.196: INFO: DNS probes using dns-6204/dns-test-c3221da9-284c-46c4-ae85-96877e08760a succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:35.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6204" for this suite.

• [SLOW TEST:6.166 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":53,"skipped":1184,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:35.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:46.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6027" for this suite.

• [SLOW TEST:11.706 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":54,"skipped":1194,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:46.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 10:43:47.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356" in namespace "projected-4024" to be "Succeeded or Failed"
Jul 25 10:43:47.086: INFO: Pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356": Phase="Pending", Reason="", readiness=false. Elapsed: 9.695194ms
Jul 25 10:43:49.090: INFO: Pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013640509s
Jul 25 10:43:51.095: INFO: Pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356": Phase="Running", Reason="", readiness=true. Elapsed: 4.01852038s
Jul 25 10:43:53.099: INFO: Pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022761499s
STEP: Saw pod success
Jul 25 10:43:53.099: INFO: Pod "downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356" satisfied condition "Succeeded or Failed"
Jul 25 10:43:53.101: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356 container client-container: 
STEP: delete the pod
Jul 25 10:43:53.170: INFO: Waiting for pod downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356 to disappear
Jul 25 10:43:53.192: INFO: Pod downwardapi-volume-c207be1a-6204-4bc5-9051-8c06dc03b356 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:53.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4024" for this suite.

• [SLOW TEST:6.248 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":1206,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:53.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 25 10:43:53.278: INFO: Waiting up to 5m0s for pod "downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3" in namespace "downward-api-9981" to be "Succeeded or Failed"
Jul 25 10:43:53.337: INFO: Pod "downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 58.779865ms
Jul 25 10:43:55.340: INFO: Pod "downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062244402s
Jul 25 10:43:57.343: INFO: Pod "downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065501751s
STEP: Saw pod success
Jul 25 10:43:57.343: INFO: Pod "downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3" satisfied condition "Succeeded or Failed"
Jul 25 10:43:57.345: INFO: Trying to get logs from node kali-worker2 pod downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3 container dapi-container: 
STEP: delete the pod
Jul 25 10:43:57.383: INFO: Waiting for pod downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3 to disappear
Jul 25 10:43:57.391: INFO: Pod downward-api-d91da7e8-ce31-4fe1-adfc-0ca1d6b23ad3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:43:57.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9981" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1215,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:43:57.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:03.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2880" for this suite.

• [SLOW TEST:6.104 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1223,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:03.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:44:04.014: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:44:06.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270644, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270644, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270644, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270643, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:44:09.307: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:09.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6712" for this suite.
STEP: Destroying namespace "webhook-6712-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.181 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":58,"skipped":1234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:09.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-e80f1f37-aaa6-4276-a42d-47b9830ffd48
STEP: Creating a pod to test consume secrets
Jul 25 10:44:09.748: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728" in namespace "projected-550" to be "Succeeded or Failed"
Jul 25 10:44:09.816: INFO: Pod "pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728": Phase="Pending", Reason="", readiness=false. Elapsed: 68.301381ms
Jul 25 10:44:11.820: INFO: Pod "pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072338161s
Jul 25 10:44:13.823: INFO: Pod "pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075400927s
STEP: Saw pod success
Jul 25 10:44:13.823: INFO: Pod "pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728" satisfied condition "Succeeded or Failed"
Jul 25 10:44:13.826: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728 container projected-secret-volume-test: 
STEP: delete the pod
Jul 25 10:44:14.076: INFO: Waiting for pod pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728 to disappear
Jul 25 10:44:14.085: INFO: Pod pod-projected-secrets-e1084406-e4ff-4310-9234-07020e42a728 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-550" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1271,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:14.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 10:44:14.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e" in namespace "downward-api-1119" to be "Succeeded or Failed"
Jul 25 10:44:14.229: INFO: Pod "downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.609165ms
Jul 25 10:44:16.233: INFO: Pod "downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007379528s
Jul 25 10:44:18.237: INFO: Pod "downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011092271s
STEP: Saw pod success
Jul 25 10:44:18.237: INFO: Pod "downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e" satisfied condition "Succeeded or Failed"
Jul 25 10:44:18.240: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e container client-container: 
STEP: delete the pod
Jul 25 10:44:18.280: INFO: Waiting for pod downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e to disappear
Jul 25 10:44:18.285: INFO: Pod downwardapi-volume-96c63f82-e825-4fca-81fa-0bccc276b60e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:18.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1119" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1321,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:18.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9396
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9396
I0725 10:44:18.487525       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9396, replica count: 2
I0725 10:44:21.537972       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:44:24.538236       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 25 10:44:24.538: INFO: Creating new exec pod
Jul 25 10:44:31.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9396 execpod8ls6g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 25 10:44:34.648: INFO: stderr: "I0725 10:44:34.557267    1300 log.go:172] (0xc0005a4d10) (0xc00068f900) Create stream\nI0725 10:44:34.557341    1300 log.go:172] (0xc0005a4d10) (0xc00068f900) Stream added, broadcasting: 1\nI0725 10:44:34.560328    1300 log.go:172] (0xc0005a4d10) Reply frame received for 1\nI0725 10:44:34.560374    1300 log.go:172] (0xc0005a4d10) (0xc00068f9a0) Create stream\nI0725 10:44:34.560385    1300 log.go:172] (0xc0005a4d10) (0xc00068f9a0) Stream added, broadcasting: 3\nI0725 10:44:34.561507    1300 log.go:172] (0xc0005a4d10) Reply frame received for 3\nI0725 10:44:34.561563    1300 log.go:172] (0xc0005a4d10) (0xc00043cbe0) Create stream\nI0725 10:44:34.561591    1300 log.go:172] (0xc0005a4d10) (0xc00043cbe0) Stream added, broadcasting: 5\nI0725 10:44:34.562497    1300 log.go:172] (0xc0005a4d10) Reply frame received for 5\nI0725 10:44:34.639949    1300 log.go:172] (0xc0005a4d10) Data frame received for 5\nI0725 10:44:34.639995    1300 log.go:172] (0xc00043cbe0) (5) Data frame handling\nI0725 10:44:34.640025    1300 log.go:172] (0xc00043cbe0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0725 10:44:34.640406    1300 log.go:172] (0xc0005a4d10) Data frame received for 5\nI0725 10:44:34.640445    1300 log.go:172] (0xc00043cbe0) (5) Data frame handling\nI0725 10:44:34.640477    1300 log.go:172] (0xc00043cbe0) (5) Data frame sent\nI0725 10:44:34.640497    1300 log.go:172] (0xc0005a4d10) Data frame received for 5\nI0725 10:44:34.640511    1300 log.go:172] (0xc00043cbe0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0725 10:44:34.640947    1300 log.go:172] (0xc0005a4d10) Data frame received for 3\nI0725 10:44:34.640977    1300 log.go:172] (0xc00068f9a0) (3) Data frame handling\nI0725 10:44:34.642795    1300 log.go:172] (0xc0005a4d10) Data frame received for 1\nI0725 10:44:34.642814    1300 log.go:172] (0xc00068f900) (1) Data frame handling\nI0725 10:44:34.642836    1300 log.go:172] (0xc00068f900) (1) Data frame sent\nI0725 10:44:34.642860    1300 log.go:172] (0xc0005a4d10) (0xc00068f900) Stream removed, broadcasting: 1\nI0725 10:44:34.642893    1300 log.go:172] (0xc0005a4d10) Go away received\nI0725 10:44:34.643216    1300 log.go:172] (0xc0005a4d10) (0xc00068f900) Stream removed, broadcasting: 1\nI0725 10:44:34.643230    1300 log.go:172] (0xc0005a4d10) (0xc00068f9a0) Stream removed, broadcasting: 3\nI0725 10:44:34.643236    1300 log.go:172] (0xc0005a4d10) (0xc00043cbe0) Stream removed, broadcasting: 5\n"
Jul 25 10:44:34.648: INFO: stdout: ""
Jul 25 10:44:34.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9396 execpod8ls6g -- /bin/sh -x -c nc -zv -t -w 2 10.108.57.217 80'
Jul 25 10:44:34.862: INFO: stderr: "I0725 10:44:34.787037    1335 log.go:172] (0xc000978bb0) (0xc0007a41e0) Create stream\nI0725 10:44:34.787088    1335 log.go:172] (0xc000978bb0) (0xc0007a41e0) Stream added, broadcasting: 1\nI0725 10:44:34.792441    1335 log.go:172] (0xc000978bb0) Reply frame received for 1\nI0725 10:44:34.792505    1335 log.go:172] (0xc000978bb0) (0xc00065e000) Create stream\nI0725 10:44:34.792570    1335 log.go:172] (0xc000978bb0) (0xc00065e000) Stream added, broadcasting: 3\nI0725 10:44:34.795419    1335 log.go:172] (0xc000978bb0) Reply frame received for 3\nI0725 10:44:34.795458    1335 log.go:172] (0xc000978bb0) (0xc0007a4280) Create stream\nI0725 10:44:34.795467    1335 log.go:172] (0xc000978bb0) (0xc0007a4280) Stream added, broadcasting: 5\nI0725 10:44:34.796510    1335 log.go:172] (0xc000978bb0) Reply frame received for 5\nI0725 10:44:34.856817    1335 log.go:172] (0xc000978bb0) Data frame received for 5\nI0725 10:44:34.856950    1335 log.go:172] (0xc0007a4280) (5) Data frame handling\nI0725 10:44:34.856973    1335 log.go:172] (0xc0007a4280) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.57.217 80\nConnection to 10.108.57.217 80 port [tcp/http] succeeded!\nI0725 10:44:34.857002    1335 log.go:172] (0xc000978bb0) Data frame received for 3\nI0725 10:44:34.857038    1335 log.go:172] (0xc00065e000) (3) Data frame handling\nI0725 10:44:34.857068    1335 log.go:172] (0xc000978bb0) Data frame received for 5\nI0725 10:44:34.857090    1335 log.go:172] (0xc0007a4280) (5) Data frame handling\nI0725 10:44:34.858137    1335 log.go:172] (0xc000978bb0) Data frame received for 1\nI0725 10:44:34.858174    1335 log.go:172] (0xc0007a41e0) (1) Data frame handling\nI0725 10:44:34.858195    1335 log.go:172] (0xc0007a41e0) (1) Data frame sent\nI0725 10:44:34.858242    1335 log.go:172] (0xc000978bb0) (0xc0007a41e0) Stream removed, broadcasting: 1\nI0725 10:44:34.858275    1335 log.go:172] (0xc000978bb0) Go away received\nI0725 10:44:34.858664    1335 log.go:172] (0xc000978bb0) (0xc0007a41e0) Stream removed, broadcasting: 1\nI0725 10:44:34.858699    1335 log.go:172] (0xc000978bb0) (0xc00065e000) Stream removed, broadcasting: 3\nI0725 10:44:34.858721    1335 log.go:172] (0xc000978bb0) (0xc0007a4280) Stream removed, broadcasting: 5\n"
Jul 25 10:44:34.862: INFO: stdout: ""
Jul 25 10:44:34.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9396 execpod8ls6g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31079'
Jul 25 10:44:35.047: INFO: stderr: "I0725 10:44:34.974772    1356 log.go:172] (0xc000996000) (0xc000a32000) Create stream\nI0725 10:44:34.974838    1356 log.go:172] (0xc000996000) (0xc000a32000) Stream added, broadcasting: 1\nI0725 10:44:34.977672    1356 log.go:172] (0xc000996000) Reply frame received for 1\nI0725 10:44:34.977712    1356 log.go:172] (0xc000996000) (0xc000a32140) Create stream\nI0725 10:44:34.977728    1356 log.go:172] (0xc000996000) (0xc000a32140) Stream added, broadcasting: 3\nI0725 10:44:34.979378    1356 log.go:172] (0xc000996000) Reply frame received for 3\nI0725 10:44:34.979472    1356 log.go:172] (0xc000996000) (0xc0009c6000) Create stream\nI0725 10:44:34.979581    1356 log.go:172] (0xc000996000) (0xc0009c6000) Stream added, broadcasting: 5\nI0725 10:44:34.980714    1356 log.go:172] (0xc000996000) Reply frame received for 5\nI0725 10:44:35.040162    1356 log.go:172] (0xc000996000) Data frame received for 5\nI0725 10:44:35.040212    1356 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0725 10:44:35.040234    1356 log.go:172] (0xc0009c6000) (5) Data frame sent\nI0725 10:44:35.040251    1356 log.go:172] (0xc000996000) Data frame received for 5\nI0725 10:44:35.040268    1356 log.go:172] (0xc0009c6000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31079\nConnection to 172.18.0.13 31079 port [tcp/31079] succeeded!\nI0725 10:44:35.040303    1356 log.go:172] (0xc000996000) Data frame received for 3\nI0725 10:44:35.040337    1356 log.go:172] (0xc000a32140) (3) Data frame handling\nI0725 10:44:35.041703    1356 log.go:172] (0xc000996000) Data frame received for 1\nI0725 10:44:35.041721    1356 log.go:172] (0xc000a32000) (1) Data frame handling\nI0725 10:44:35.041731    1356 log.go:172] (0xc000a32000) (1) Data frame sent\nI0725 10:44:35.041916    1356 log.go:172] (0xc000996000) (0xc000a32000) Stream removed, broadcasting: 1\nI0725 10:44:35.041964    1356 log.go:172] (0xc000996000) Go away received\nI0725 10:44:35.042432    1356 log.go:172] (0xc000996000) (0xc000a32000) Stream removed, broadcasting: 1\nI0725 10:44:35.042457    1356 log.go:172] (0xc000996000) (0xc000a32140) Stream removed, broadcasting: 3\nI0725 10:44:35.042470    1356 log.go:172] (0xc000996000) (0xc0009c6000) Stream removed, broadcasting: 5\n"
Jul 25 10:44:35.047: INFO: stdout: ""
Jul 25 10:44:35.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9396 execpod8ls6g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31079'
Jul 25 10:44:35.239: INFO: stderr: "I0725 10:44:35.165026    1375 log.go:172] (0xc000a58000) (0xc000560b40) Create stream\nI0725 10:44:35.165099    1375 log.go:172] (0xc000a58000) (0xc000560b40) Stream added, broadcasting: 1\nI0725 10:44:35.168156    1375 log.go:172] (0xc000a58000) Reply frame received for 1\nI0725 10:44:35.168204    1375 log.go:172] (0xc000a58000) (0xc0009bc000) Create stream\nI0725 10:44:35.168218    1375 log.go:172] (0xc000a58000) (0xc0009bc000) Stream added, broadcasting: 3\nI0725 10:44:35.169199    1375 log.go:172] (0xc000a58000) Reply frame received for 3\nI0725 10:44:35.169223    1375 log.go:172] (0xc000a58000) (0xc0009bc0a0) Create stream\nI0725 10:44:35.169230    1375 log.go:172] (0xc000a58000) (0xc0009bc0a0) Stream added, broadcasting: 5\nI0725 10:44:35.170427    1375 log.go:172] (0xc000a58000) Reply frame received for 5\nI0725 10:44:35.231056    1375 log.go:172] (0xc000a58000) Data frame received for 5\nI0725 10:44:35.231089    1375 log.go:172] (0xc0009bc0a0) (5) Data frame handling\nI0725 10:44:35.231112    1375 log.go:172] (0xc0009bc0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31079\nConnection to 172.18.0.15 31079 port [tcp/31079] succeeded!\nI0725 10:44:35.231214    1375 log.go:172] (0xc000a58000) Data frame received for 3\nI0725 10:44:35.231249    1375 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0725 10:44:35.231398    1375 log.go:172] (0xc000a58000) Data frame received for 5\nI0725 10:44:35.231422    1375 log.go:172] (0xc0009bc0a0) (5) Data frame handling\nI0725 10:44:35.233241    1375 log.go:172] (0xc000a58000) Data frame received for 1\nI0725 10:44:35.233265    1375 log.go:172] (0xc000560b40) (1) Data frame handling\nI0725 10:44:35.233275    1375 log.go:172] (0xc000560b40) (1) Data frame sent\nI0725 10:44:35.233289    1375 log.go:172] (0xc000a58000) (0xc000560b40) Stream removed, broadcasting: 1\nI0725 10:44:35.233304    1375 log.go:172] (0xc000a58000) Go away received\nI0725 10:44:35.233759    1375 log.go:172] (0xc000a58000) (0xc000560b40) Stream removed, broadcasting: 1\nI0725 10:44:35.233784    1375 log.go:172] (0xc000a58000) (0xc0009bc000) Stream removed, broadcasting: 3\nI0725 10:44:35.233797    1375 log.go:172] (0xc000a58000) (0xc0009bc0a0) Stream removed, broadcasting: 5\n"
Jul 25 10:44:35.239: INFO: stdout: ""
Jul 25 10:44:35.239: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9396" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.066 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":61,"skipped":1343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:35.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1489
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1489
STEP: Creating statefulset with conflicting port in namespace statefulset-1489
STEP: Waiting until pod test-pod will start running in namespace statefulset-1489
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1489
Jul 25 10:44:39.651: INFO: Observed stateful pod in namespace: statefulset-1489, name: ss-0, uid: bcafdd5b-d200-4d98-802c-a14c2873fb57, status phase: Pending. Waiting for statefulset controller to delete.
Jul 25 10:44:40.144: INFO: Observed stateful pod in namespace: statefulset-1489, name: ss-0, uid: bcafdd5b-d200-4d98-802c-a14c2873fb57, status phase: Failed. Waiting for statefulset controller to delete.
Jul 25 10:44:40.187: INFO: Observed stateful pod in namespace: statefulset-1489, name: ss-0, uid: bcafdd5b-d200-4d98-802c-a14c2873fb57, status phase: Failed. Waiting for statefulset controller to delete.
Jul 25 10:44:40.190: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1489
STEP: Removing pod with conflicting port in namespace statefulset-1489
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1489 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 25 10:44:46.316: INFO: Deleting all statefulset in ns statefulset-1489
Jul 25 10:44:46.318: INFO: Scaling statefulset ss to 0
Jul 25 10:44:56.338: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 10:44:56.341: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:44:56.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1489" for this suite.

• [SLOW TEST:21.037 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":62,"skipped":1369,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:44:56.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 25 10:44:56.605: INFO: Waiting up to 5m0s for pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf" in namespace "emptydir-4576" to be "Succeeded or Failed"
Jul 25 10:44:56.609: INFO: Pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.388149ms
Jul 25 10:44:58.612: INFO: Pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007018845s
Jul 25 10:45:00.617: INFO: Pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.011456523s
Jul 25 10:45:02.685: INFO: Pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079210867s
STEP: Saw pod success
Jul 25 10:45:02.685: INFO: Pod "pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf" satisfied condition "Succeeded or Failed"
Jul 25 10:45:02.688: INFO: Trying to get logs from node kali-worker pod pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf container test-container: 
STEP: delete the pod
Jul 25 10:45:02.727: INFO: Waiting for pod pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf to disappear
Jul 25 10:45:02.774: INFO: Pod pod-25686554-ef9e-41ef-aa7e-54e75bcc24cf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:45:02.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4576" for this suite.

• [SLOW TEST:6.435 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1375,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:45:02.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 25 10:45:03.034: INFO: Waiting up to 5m0s for pod "downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb" in namespace "downward-api-9605" to be "Succeeded or Failed"
Jul 25 10:45:03.067: INFO: Pod "downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.563412ms
Jul 25 10:45:05.070: INFO: Pod "downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036254802s
Jul 25 10:45:07.075: INFO: Pod "downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040738983s
STEP: Saw pod success
Jul 25 10:45:07.075: INFO: Pod "downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb" satisfied condition "Succeeded or Failed"
Jul 25 10:45:07.078: INFO: Trying to get logs from node kali-worker pod downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb container dapi-container: 
STEP: delete the pod
Jul 25 10:45:07.344: INFO: Waiting for pod downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb to disappear
Jul 25 10:45:07.382: INFO: Pod downward-api-67713600-fea3-4ba4-ae9a-b2cea6dd7beb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:45:07.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9605" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1421,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:45:07.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 25 10:45:07.462: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019419 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:07.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019419 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 25 10:45:17.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019483 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:17.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019483 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 25 10:45:27.552: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019558 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:27.552: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019558 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 25 10:45:37.669: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019600 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:37.669: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-a 58ada0e6-88a7-403e-83f4-34d45d42b0a7 4019600 0 2020-07-25 10:45:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 25 10:45:47.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-b 0b3ce805-308b-4dd4-9b7c-ce3567e93ad3 4019654 0 2020-07-25 10:45:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:47.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-b 0b3ce805-308b-4dd4-9b7c-ce3567e93ad3 4019654 0 2020-07-25 10:45:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 25 10:45:57.685: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-b 0b3ce805-308b-4dd4-9b7c-ce3567e93ad3 4019697 0 2020-07-25 10:45:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 10:45:57.686: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1718 /api/v1/namespaces/watch-1718/configmaps/e2e-watch-test-configmap-b 0b3ce805-308b-4dd4-9b7c-ce3567e93ad3 4019697 0 2020-07-25 10:45:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-25 10:45:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:46:07.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1718" for this suite.

• [SLOW TEST:60.301 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":65,"skipped":1429,"failed":0}
SSSSSSSSSSSS
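
The Watchers test above drives label-selected ConfigMap watches from the e2e binary and prints each event it receives. Below is a minimal client-go sketch of the same watch pattern; the kubeconfig path, namespace, and label selector are taken from the log for illustration, but the code itself is an assumption, not the suite's implementation.

// Sketch: watch ConfigMaps carrying the test's label and print the event
// types (ADDED / MODIFIED / DELETED), similar to the "Got : ..." lines above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run uses (path from the log; adjust as needed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only ConfigMaps labelled like "configmap A" in the test above.
	w, err := client.CoreV1().ConfigMaps("watch-1718").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
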
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:46:07.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-99h56 in namespace proxy-3084
I0725 10:46:07.891216       7 runners.go:190] Created replication controller with name: proxy-service-99h56, namespace: proxy-3084, replica count: 1
I0725 10:46:08.941674       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:09.941929       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:10.942196       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:11.942433       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:12.942687       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0725 10:46:13.943000       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0725 10:46:14.943222       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0725 10:46:15.943522       7 runners.go:190] proxy-service-99h56 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 25 10:46:15.947: INFO: setup took 8.09799803s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 25 10:46:15.953: INFO: (0) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 6.147992ms)
Jul 25 10:46:15.954: INFO: (0) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 6.580436ms)
Jul 25 10:46:15.956: INFO: (0) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 8.773369ms)
Jul 25 10:46:15.956: INFO: (0) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 8.845067ms)
Jul 25 10:46:15.956: INFO: (0) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 8.884993ms)
Jul 25 10:46:15.956: INFO: (0) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 9.159305ms)
Jul 25 10:46:15.961: INFO: (0) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 14.248072ms)
Jul 25 10:46:15.961: INFO: (0) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 14.292658ms)
Jul 25 10:46:15.961: INFO: (0) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 14.400011ms)
Jul 25 10:46:15.961: INFO: (0) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 14.196811ms)
Jul 25 10:46:15.961: INFO: (0) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 14.278783ms)
Jul 25 10:46:15.964: INFO: (0) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 16.872139ms)
Jul 25 10:46:15.964: INFO: (0) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 16.70014ms)
Jul 25 10:46:15.964: INFO: (0) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 17.277988ms)
Jul 25 10:46:15.964: INFO: (0) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test (200; 5.347014ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 5.380494ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 5.504784ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.534721ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 5.572101ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.59182ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.619938ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 5.638439ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 5.82518ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 5.884445ms)
Jul 25 10:46:15.970: INFO: (1) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 5.930162ms)
Jul 25 10:46:15.975: INFO: (2) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.177892ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 5.051382ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 5.00685ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.054585ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 5.191444ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 5.159571ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 5.245092ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 5.192617ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 5.196714ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 5.167686ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.427613ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 5.660336ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 5.608262ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.620794ms)
Jul 25 10:46:15.976: INFO: (2) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 5.670487ms)
Jul 25 10:46:15.981: INFO: (3) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.901721ms)
Jul 25 10:46:15.981: INFO: (3) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 4.8674ms)
Jul 25 10:46:15.981: INFO: (3) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 4.770993ms)
Jul 25 10:46:15.982: INFO: (3) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 5.281702ms)
Jul 25 10:46:15.982: INFO: (3) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 5.204ms)
Jul 25 10:46:15.982: INFO: (3) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 5.689234ms)
Jul 25 10:46:15.982: INFO: (3) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 5.673856ms)
Jul 25 10:46:15.982: INFO: (3) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test (200; 5.945878ms)
Jul 25 10:46:15.983: INFO: (3) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 6.900533ms)
Jul 25 10:46:15.983: INFO: (3) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 6.939763ms)
Jul 25 10:46:15.983: INFO: (3) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 7.031859ms)
Jul 25 10:46:15.988: INFO: (4) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 3.13673ms)
Jul 25 10:46:15.988: INFO: (4) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.016853ms)
Jul 25 10:46:15.989: INFO: (4) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.923124ms)
Jul 25 10:46:15.989: INFO: (4) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.440236ms)
Jul 25 10:46:15.989: INFO: (4) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.777126ms)
Jul 25 10:46:15.990: INFO: (4) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 5.840262ms)
Jul 25 10:46:15.990: INFO: (4) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 6.070492ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 6.174251ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 6.162991ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 6.693033ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 7.24017ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 7.401479ms)
Jul 25 10:46:15.991: INFO: (4) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 7.111481ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 6.431897ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 6.505184ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 6.508421ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 6.5526ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 6.513485ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 6.547559ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 6.615587ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 6.576062ms)
Jul 25 10:46:15.998: INFO: (5) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 6.601749ms)
Jul 25 10:46:15.999: INFO: (5) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 8.155347ms)
Jul 25 10:46:15.999: INFO: (5) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 8.213883ms)
Jul 25 10:46:16.000: INFO: (5) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 8.430081ms)
Jul 25 10:46:16.000: INFO: (5) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 8.343517ms)
Jul 25 10:46:16.000: INFO: (5) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 8.387206ms)
Jul 25 10:46:16.000: INFO: (5) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 8.478377ms)
Jul 25 10:46:16.007: INFO: (6) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 7.161543ms)
Jul 25 10:46:16.008: INFO: (6) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 7.790973ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 9.303411ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 9.33741ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 9.264535ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 9.299486ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 9.383117ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 9.333834ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 9.499752ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 9.444545ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 9.418797ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 9.484297ms)
Jul 25 10:46:16.009: INFO: (6) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test (200; 4.893325ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 5.379737ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 5.443523ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 5.542298ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 5.446743ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.472688ms)
Jul 25 10:46:16.015: INFO: (7) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 4.453383ms)
Jul 25 10:46:16.020: INFO: (8) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.472574ms)
Jul 25 10:46:16.020: INFO: (8) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 4.49856ms)
Jul 25 10:46:16.020: INFO: (8) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.531348ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 4.56107ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.580548ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.537879ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.557019ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 4.660246ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 4.610012ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.678664ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 4.679787ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.692629ms)
Jul 25 10:46:16.021: INFO: (8) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.638776ms)
Jul 25 10:46:16.024: INFO: (9) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 2.915918ms)
Jul 25 10:46:16.024: INFO: (9) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 2.890122ms)
Jul 25 10:46:16.024: INFO: (9) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 2.981146ms)
Jul 25 10:46:16.024: INFO: (9) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 2.973398ms)
Jul 25 10:46:16.025: INFO: (9) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 4.766657ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.792341ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 5.345219ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.487497ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 5.570491ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 5.679035ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 5.654167ms)
Jul 25 10:46:16.026: INFO: (9) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.668593ms)
Jul 25 10:46:16.027: INFO: (9) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 5.80949ms)
Jul 25 10:46:16.027: INFO: (9) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 5.848578ms)
Jul 25 10:46:16.030: INFO: (10) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.72983ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 3.811224ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 3.877406ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 4.621438ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.612002ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 4.658786ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 4.69207ms)
Jul 25 10:46:16.031: INFO: (10) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.63779ms)
Jul 25 10:46:16.032: INFO: (10) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.881894ms)
Jul 25 10:46:16.033: INFO: (10) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.80643ms)
Jul 25 10:46:16.033: INFO: (10) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 5.931379ms)
Jul 25 10:46:16.033: INFO: (10) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 5.918293ms)
Jul 25 10:46:16.033: INFO: (10) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 6.00806ms)
Jul 25 10:46:16.035: INFO: (11) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 2.272093ms)
Jul 25 10:46:16.035: INFO: (11) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 2.266532ms)
Jul 25 10:46:16.036: INFO: (11) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 2.82133ms)
Jul 25 10:46:16.036: INFO: (11) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 3.278397ms)
Jul 25 10:46:16.036: INFO: (11) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 4.363071ms)
Jul 25 10:46:16.037: INFO: (11) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.373564ms)
Jul 25 10:46:16.037: INFO: (11) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.427727ms)
Jul 25 10:46:16.037: INFO: (11) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 4.568033ms)
Jul 25 10:46:16.038: INFO: (11) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.736509ms)
Jul 25 10:46:16.040: INFO: (12) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 2.857428ms)
Jul 25 10:46:16.040: INFO: (12) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 2.868451ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 2.966814ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 3.304591ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 3.556677ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.50856ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 3.556246ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.588588ms)
Jul 25 10:46:16.041: INFO: (12) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 3.685899ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.306325ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 4.434197ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 4.475149ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.418385ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 4.816197ms)
Jul 25 10:46:16.042: INFO: (12) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 4.55217ms)
Jul 25 10:46:16.046: INFO: (13) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 3.55342ms)
Jul 25 10:46:16.046: INFO: (13) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 4.340923ms)
Jul 25 10:46:16.047: INFO: (13) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.669246ms)
Jul 25 10:46:16.047: INFO: (13) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.696999ms)
Jul 25 10:46:16.047: INFO: (13) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 4.756538ms)
Jul 25 10:46:16.047: INFO: (13) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.716433ms)
Jul 25 10:46:16.050: INFO: (14) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 2.331119ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 3.652489ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 3.547799ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 3.575601ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 3.653959ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.608162ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 3.669362ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.751053ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 3.708476ms)
Jul 25 10:46:16.051: INFO: (14) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 3.580953ms)
Jul 25 10:46:16.052: INFO: (14) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 5.079674ms)
Jul 25 10:46:16.053: INFO: (14) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 5.148845ms)
Jul 25 10:46:16.053: INFO: (14) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 5.138481ms)
Jul 25 10:46:16.053: INFO: (14) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 5.251328ms)
Jul 25 10:46:16.053: INFO: (14) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 5.289949ms)
Jul 25 10:46:16.057: INFO: (15) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 4.401102ms)
Jul 25 10:46:16.057: INFO: (15) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.576763ms)
Jul 25 10:46:16.057: INFO: (15) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.561485ms)
Jul 25 10:46:16.057: INFO: (15) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.572883ms)
Jul 25 10:46:16.057: INFO: (15) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.587434ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.974245ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 4.903655ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test<... (200; 4.934329ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 5.035313ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 4.97872ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.98017ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.999334ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 5.065598ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 5.018353ms)
Jul 25 10:46:16.058: INFO: (15) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 5.050343ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 3.469514ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 3.480722ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 3.501083ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 3.5582ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 3.562349ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 3.715297ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 4.029198ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.948089ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 3.994207ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 3.985338ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:1080/proxy/: ... (200; 4.037201ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.281127ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.510183ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.39375ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 4.491217ms)
Jul 25 10:46:16.062: INFO: (16) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 3.183811ms)
Jul 25 10:46:16.066: INFO: (17) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 3.590231ms)
Jul 25 10:46:16.066: INFO: (17) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 3.652093ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 4.520603ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.537558ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 4.506148ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 4.482715ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.534162ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.576024ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.499836ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.540455ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.662214ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 4.646818ms)
Jul 25 10:46:16.067: INFO: (17) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: test (200; 6.146849ms)
Jul 25 10:46:16.074: INFO: (18) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 6.255398ms)
Jul 25 10:46:16.074: INFO: (18) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 6.225067ms)
Jul 25 10:46:16.074: INFO: (18) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 6.221852ms)
Jul 25 10:46:16.074: INFO: (18) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 6.223983ms)
Jul 25 10:46:16.074: INFO: (18) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 6.243124ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 6.9105ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 6.936543ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 6.995669ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 6.980742ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 7.100427ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 6.968271ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:160/proxy/: foo (200; 7.061673ms)
Jul 25 10:46:16.075: INFO: (18) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 7.079524ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:1080/proxy/: test<... (200; 3.987882ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/: foo (200; 4.142012ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname2/proxy/: bar (200; 4.576618ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx/proxy/: test (200; 4.658024ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:160/proxy/: foo (200; 4.691847ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname2/proxy/: tls qux (200; 4.756013ms)
Jul 25 10:46:16.079: INFO: (19) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:460/proxy/: tls baz (200; 4.70082ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:462/proxy/: tls qux (200; 4.708855ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/services/http:proxy-service-99h56:portname1/proxy/: foo (200; 4.71679ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/pods/http:proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.728907ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/services/https:proxy-service-99h56:tlsportname1/proxy/: tls baz (200; 4.820719ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname2/proxy/: bar (200; 4.786039ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/pods/proxy-service-99h56-pcpwx:162/proxy/: bar (200; 4.761917ms)
Jul 25 10:46:16.080: INFO: (19) /api/v1/namespaces/proxy-3084/pods/https:proxy-service-99h56-pcpwx:443/proxy/: ... (200; 5.243158ms)
STEP: deleting ReplicationController proxy-service-99h56 in namespace proxy-3084, will wait for the garbage collector to delete the pods
Jul 25 10:46:16.139: INFO: Deleting ReplicationController proxy-service-99h56 took: 6.834312ms
Jul 25 10:46:16.239: INFO: Terminating ReplicationController proxy-service-99h56 pods took: 100.220842ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:46:23.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3084" for this suite.

• [SLOW TEST:15.879 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":66,"skipped":1441,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
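
The proxy test above times GET requests against the apiserver proxy subresources (/api/v1/namespaces/<ns>/pods/<name>:<port>/proxy/ and /api/v1/namespaces/<ns>/services/<name>:<port>/proxy/). The following is a hedged sketch of one such request using client-go, assuming a clientset built from the run's kubeconfig as in the earlier watch sketch; the namespace, service name, and port name are copied from the log purely for illustration, and the function name is hypothetical.

// Sketch: GET through the apiserver's service proxy subresource, the same URL
// shape the attempts above are timing. On the test's echo server this port
// answers "foo".
package example

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

func proxyGetThroughService(client kubernetes.Interface) ([]byte, error) {
	return client.CoreV1().RESTClient().
		Get().
		AbsPath("/api/v1/namespaces/proxy-3084/services/proxy-service-99h56:portname1/proxy/").
		DoRaw(context.TODO())
}
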
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:46:23.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul 25 10:46:23.627: INFO: Created pod &Pod{ObjectMeta:{dns-5983  dns-5983 /api/v1/namespaces/dns-5983/pods/dns-5983 0d1f2ea1-f942-42a0-9d1a-7c77cbb405a6 4019843 0 2020-07-25 10:46:23 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-07-25 10:46:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vhp2k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vhp2k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vhp2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:46:23.639: INFO: The status of Pod dns-5983 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 10:46:25.901: INFO: The status of Pod dns-5983 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 10:46:27.643: INFO: The status of Pod dns-5983 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jul 25 10:46:27.643: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5983 PodName:dns-5983 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:46:27.643: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:46:27.683061       7 log.go:172] (0xc0015922c0) (0xc0018e0960) Create stream
I0725 10:46:27.683097       7 log.go:172] (0xc0015922c0) (0xc0018e0960) Stream added, broadcasting: 1
I0725 10:46:27.684991       7 log.go:172] (0xc0015922c0) Reply frame received for 1
I0725 10:46:27.685030       7 log.go:172] (0xc0015922c0) (0xc00249cb40) Create stream
I0725 10:46:27.685043       7 log.go:172] (0xc0015922c0) (0xc00249cb40) Stream added, broadcasting: 3
I0725 10:46:27.686172       7 log.go:172] (0xc0015922c0) Reply frame received for 3
I0725 10:46:27.686226       7 log.go:172] (0xc0015922c0) (0xc0018e0a00) Create stream
I0725 10:46:27.686243       7 log.go:172] (0xc0015922c0) (0xc0018e0a00) Stream added, broadcasting: 5
I0725 10:46:27.687273       7 log.go:172] (0xc0015922c0) Reply frame received for 5
I0725 10:46:27.766230       7 log.go:172] (0xc0015922c0) Data frame received for 3
I0725 10:46:27.766264       7 log.go:172] (0xc00249cb40) (3) Data frame handling
I0725 10:46:27.766283       7 log.go:172] (0xc00249cb40) (3) Data frame sent
I0725 10:46:27.767284       7 log.go:172] (0xc0015922c0) Data frame received for 3
I0725 10:46:27.767308       7 log.go:172] (0xc00249cb40) (3) Data frame handling
I0725 10:46:27.767459       7 log.go:172] (0xc0015922c0) Data frame received for 5
I0725 10:46:27.767472       7 log.go:172] (0xc0018e0a00) (5) Data frame handling
I0725 10:46:27.769139       7 log.go:172] (0xc0015922c0) Data frame received for 1
I0725 10:46:27.769154       7 log.go:172] (0xc0018e0960) (1) Data frame handling
I0725 10:46:27.769169       7 log.go:172] (0xc0018e0960) (1) Data frame sent
I0725 10:46:27.769197       7 log.go:172] (0xc0015922c0) (0xc0018e0960) Stream removed, broadcasting: 1
I0725 10:46:27.769293       7 log.go:172] (0xc0015922c0) (0xc0018e0960) Stream removed, broadcasting: 1
I0725 10:46:27.769307       7 log.go:172] (0xc0015922c0) (0xc00249cb40) Stream removed, broadcasting: 3
I0725 10:46:27.769314       7 log.go:172] (0xc0015922c0) (0xc0018e0a00) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
I0725 10:46:27.769338       7 log.go:172] (0xc0015922c0) Go away received
Jul 25 10:46:27.769: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5983 PodName:dns-5983 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 10:46:27.769: INFO: >>> kubeConfig: /root/.kube/config
I0725 10:46:27.795994       7 log.go:172] (0xc002720370) (0xc00249cdc0) Create stream
I0725 10:46:27.796033       7 log.go:172] (0xc002720370) (0xc00249cdc0) Stream added, broadcasting: 1
I0725 10:46:27.800608       7 log.go:172] (0xc002720370) Reply frame received for 1
I0725 10:46:27.800647       7 log.go:172] (0xc002720370) (0xc001ae8280) Create stream
I0725 10:46:27.800661       7 log.go:172] (0xc002720370) (0xc001ae8280) Stream added, broadcasting: 3
I0725 10:46:27.802924       7 log.go:172] (0xc002720370) Reply frame received for 3
I0725 10:46:27.802977       7 log.go:172] (0xc002720370) (0xc001ae88c0) Create stream
I0725 10:46:27.802991       7 log.go:172] (0xc002720370) (0xc001ae88c0) Stream added, broadcasting: 5
I0725 10:46:27.804164       7 log.go:172] (0xc002720370) Reply frame received for 5
I0725 10:46:27.867393       7 log.go:172] (0xc002720370) Data frame received for 3
I0725 10:46:27.867423       7 log.go:172] (0xc001ae8280) (3) Data frame handling
I0725 10:46:27.867440       7 log.go:172] (0xc001ae8280) (3) Data frame sent
I0725 10:46:27.867864       7 log.go:172] (0xc002720370) Data frame received for 3
I0725 10:46:27.867892       7 log.go:172] (0xc001ae8280) (3) Data frame handling
I0725 10:46:27.867936       7 log.go:172] (0xc002720370) Data frame received for 5
I0725 10:46:27.867974       7 log.go:172] (0xc001ae88c0) (5) Data frame handling
I0725 10:46:27.869558       7 log.go:172] (0xc002720370) Data frame received for 1
I0725 10:46:27.869579       7 log.go:172] (0xc00249cdc0) (1) Data frame handling
I0725 10:46:27.869590       7 log.go:172] (0xc00249cdc0) (1) Data frame sent
I0725 10:46:27.869611       7 log.go:172] (0xc002720370) (0xc00249cdc0) Stream removed, broadcasting: 1
I0725 10:46:27.869631       7 log.go:172] (0xc002720370) Go away received
I0725 10:46:27.869756       7 log.go:172] (0xc002720370) (0xc00249cdc0) Stream removed, broadcasting: 1
I0725 10:46:27.869777       7 log.go:172] (0xc002720370) (0xc001ae8280) Stream removed, broadcasting: 3
I0725 10:46:27.869787       7 log.go:172] (0xc002720370) (0xc001ae88c0) Stream removed, broadcasting: 5
Jul 25 10:46:27.869: INFO: Deleting pod dns-5983...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:46:27.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5983" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":67,"skipped":1466,"failed":0}
SS
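
The DNS test above creates a pod with dnsPolicy=None plus a custom dnsConfig (nameserver 1.1.1.1, search path resolv.conf.local) and then checks the resulting resolver configuration from inside the container. Below is a minimal sketch of a pod with the same DNS shape, assuming a clientset built as in the earlier watch sketch; the function name, pod name, and namespace argument are hypothetical.

// Sketch: pod with dnsPolicy=None and an explicit dnsConfig, mirroring the
// pod the DNS test builds above.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createCustomDNSPod(client kubernetes.Interface, namespace string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "custom-dns-example"},
		Spec: corev1.PodSpec{
			// None: ignore cluster DNS entirely and use only the dnsConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"pause"},
			}},
		},
	}
	return client.CoreV1().Pods(namespace).Create(context.TODO(), pod, metav1.CreateOptions{})
}
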
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:46:27.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:46:28.030: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576" in namespace "security-context-test-810" to be "Succeeded or Failed"
Jul 25 10:46:28.279: INFO: Pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576": Phase="Pending", Reason="", readiness=false. Elapsed: 248.887717ms
Jul 25 10:46:30.282: INFO: Pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25278266s
Jul 25 10:46:32.344: INFO: Pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314686837s
Jul 25 10:46:34.348: INFO: Pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318138357s
Jul 25 10:46:34.348: INFO: Pod "busybox-readonly-false-638c6431-b87a-4652-b26e-375af9eb8576" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:46:34.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-810" for this suite.

• [SLOW TEST:6.445 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
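
The Security Context test above runs a busybox pod whose container sets readOnlyRootFilesystem=false and waits for it to reach "Succeeded or Failed"; with a writable root filesystem the write succeeds and the pod ends up Succeeded. A rough sketch of such a pod follows, again assuming an existing clientset; the function name, pod name, command, and namespace argument are illustrative assumptions.

// Sketch: busybox pod whose security context explicitly leaves the root
// filesystem writable, verified by writing a file and exiting 0.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createWritableRootfsPod(client kubernetes.Interface, namespace string) (*corev1.Pod, error) {
	readOnly := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo ok > /rootfs-write-check"},
				SecurityContext: &corev1.SecurityContext{
					// false: the container may write to its root filesystem.
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	return client.CoreV1().Pods(namespace).Create(context.TODO(), pod, metav1.CreateOptions{})
}
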
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:46:34.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:46:34.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9411
I0725 10:46:34.457296       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9411, replica count: 1
I0725 10:46:35.507740       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:36.507967       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:37.508191       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 10:46:38.508390       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 25 10:46:38.686: INFO: Created: latency-svc-qzn9v
Jul 25 10:46:38.722: INFO: Got endpoints: latency-svc-qzn9v [113.501199ms]
Jul 25 10:46:39.287: INFO: Created: latency-svc-fl27g
Jul 25 10:46:39.351: INFO: Got endpoints: latency-svc-fl27g [628.78731ms]
Jul 25 10:46:39.592: INFO: Created: latency-svc-xw9k6
Jul 25 10:46:39.676: INFO: Got endpoints: latency-svc-xw9k6 [954.084452ms]
Jul 25 10:46:39.728: INFO: Created: latency-svc-8v6r2
Jul 25 10:46:39.735: INFO: Got endpoints: latency-svc-8v6r2 [1.013532895s]
Jul 25 10:46:39.884: INFO: Created: latency-svc-ldck5
Jul 25 10:46:39.944: INFO: Got endpoints: latency-svc-ldck5 [1.22247858s]
Jul 25 10:46:39.957: INFO: Created: latency-svc-9wtp6
Jul 25 10:46:39.970: INFO: Got endpoints: latency-svc-9wtp6 [1.248030284s]
Jul 25 10:46:40.028: INFO: Created: latency-svc-648t5
Jul 25 10:46:40.036: INFO: Got endpoints: latency-svc-648t5 [1.313926403s]
Jul 25 10:46:40.055: INFO: Created: latency-svc-fdds9
Jul 25 10:46:40.072: INFO: Got endpoints: latency-svc-fdds9 [1.350513408s]
Jul 25 10:46:40.171: INFO: Created: latency-svc-4mvww
Jul 25 10:46:40.194: INFO: Got endpoints: latency-svc-4mvww [1.472216814s]
Jul 25 10:46:40.235: INFO: Created: latency-svc-n95nk
Jul 25 10:46:40.246: INFO: Got endpoints: latency-svc-n95nk [1.524520532s]
Jul 25 10:46:40.327: INFO: Created: latency-svc-p8nkf
Jul 25 10:46:40.343: INFO: Got endpoints: latency-svc-p8nkf [1.621661789s]
Jul 25 10:46:40.416: INFO: Created: latency-svc-x46mk
Jul 25 10:46:40.501: INFO: Got endpoints: latency-svc-x46mk [254.047133ms]
Jul 25 10:46:40.555: INFO: Created: latency-svc-qqf6z
Jul 25 10:46:40.591: INFO: Got endpoints: latency-svc-qqf6z [1.869221015s]
Jul 25 10:46:40.661: INFO: Created: latency-svc-7bm8h
Jul 25 10:46:40.685: INFO: Got endpoints: latency-svc-7bm8h [1.96354743s]
Jul 25 10:46:40.727: INFO: Created: latency-svc-cmv45
Jul 25 10:46:40.817: INFO: Got endpoints: latency-svc-cmv45 [2.095563424s]
Jul 25 10:46:41.171: INFO: Created: latency-svc-plcnp
Jul 25 10:46:41.273: INFO: Got endpoints: latency-svc-plcnp [2.551509398s]
Jul 25 10:46:41.323: INFO: Created: latency-svc-qvw8n
Jul 25 10:46:41.357: INFO: Got endpoints: latency-svc-qvw8n [2.635255791s]
Jul 25 10:46:41.459: INFO: Created: latency-svc-mnvqn
Jul 25 10:46:41.476: INFO: Got endpoints: latency-svc-mnvqn [2.125096717s]
Jul 25 10:46:41.521: INFO: Created: latency-svc-phn2g
Jul 25 10:46:41.650: INFO: Got endpoints: latency-svc-phn2g [1.974246427s]
Jul 25 10:46:41.657: INFO: Created: latency-svc-dmszx
Jul 25 10:46:41.701: INFO: Got endpoints: latency-svc-dmszx [1.966165074s]
Jul 25 10:46:41.731: INFO: Created: latency-svc-vwbr4
Jul 25 10:46:41.734: INFO: Got endpoints: latency-svc-vwbr4 [1.789920916s]
Jul 25 10:46:41.783: INFO: Created: latency-svc-mllrq
Jul 25 10:46:41.788: INFO: Got endpoints: latency-svc-mllrq [1.818320986s]
Jul 25 10:46:41.845: INFO: Created: latency-svc-wb2qc
Jul 25 10:46:41.861: INFO: Got endpoints: latency-svc-wb2qc [1.825365118s]
Jul 25 10:46:41.942: INFO: Created: latency-svc-pb5cm
Jul 25 10:46:41.964: INFO: Got endpoints: latency-svc-pb5cm [1.89151319s]
Jul 25 10:46:41.984: INFO: Created: latency-svc-plwdq
Jul 25 10:46:41.999: INFO: Got endpoints: latency-svc-plwdq [1.805001989s]
Jul 25 10:46:42.019: INFO: Created: latency-svc-brrfv
Jul 25 10:46:42.070: INFO: Got endpoints: latency-svc-brrfv [1.726055301s]
Jul 25 10:46:42.090: INFO: Created: latency-svc-hzzc4
Jul 25 10:46:42.102: INFO: Got endpoints: latency-svc-hzzc4 [1.60113397s]
Jul 25 10:46:42.146: INFO: Created: latency-svc-mdx77
Jul 25 10:46:42.162: INFO: Got endpoints: latency-svc-mdx77 [1.570872683s]
Jul 25 10:46:42.206: INFO: Created: latency-svc-j5ljc
Jul 25 10:46:42.237: INFO: Got endpoints: latency-svc-j5ljc [1.551474466s]
Jul 25 10:46:42.238: INFO: Created: latency-svc-nprsk
Jul 25 10:46:42.276: INFO: Got endpoints: latency-svc-nprsk [1.458239843s]
Jul 25 10:46:42.364: INFO: Created: latency-svc-jt75q
Jul 25 10:46:42.372: INFO: Got endpoints: latency-svc-jt75q [1.098577376s]
Jul 25 10:46:42.410: INFO: Created: latency-svc-s96vk
Jul 25 10:46:42.451: INFO: Got endpoints: latency-svc-s96vk [1.094209807s]
Jul 25 10:46:42.521: INFO: Created: latency-svc-frk6r
Jul 25 10:46:42.546: INFO: Got endpoints: latency-svc-frk6r [1.070565046s]
Jul 25 10:46:42.547: INFO: Created: latency-svc-gvx95
Jul 25 10:46:42.596: INFO: Got endpoints: latency-svc-gvx95 [945.252813ms]
Jul 25 10:46:42.701: INFO: Created: latency-svc-df5dx
Jul 25 10:46:42.710: INFO: Got endpoints: latency-svc-df5dx [1.008349595s]
Jul 25 10:46:42.733: INFO: Created: latency-svc-qvmfr
Jul 25 10:46:42.760: INFO: Got endpoints: latency-svc-qvmfr [1.02550644s]
Jul 25 10:46:42.860: INFO: Created: latency-svc-dj4v9
Jul 25 10:46:42.878: INFO: Got endpoints: latency-svc-dj4v9 [1.089980619s]
Jul 25 10:46:42.895: INFO: Created: latency-svc-9xz2h
Jul 25 10:46:42.918: INFO: Got endpoints: latency-svc-9xz2h [1.057187653s]
Jul 25 10:46:42.973: INFO: Created: latency-svc-pql89
Jul 25 10:46:42.982: INFO: Got endpoints: latency-svc-pql89 [1.017794475s]
Jul 25 10:46:43.008: INFO: Created: latency-svc-rhx8p
Jul 25 10:46:43.023: INFO: Got endpoints: latency-svc-rhx8p [1.023383262s]
Jul 25 10:46:43.069: INFO: Created: latency-svc-qsfvx
Jul 25 10:46:43.105: INFO: Got endpoints: latency-svc-qsfvx [1.034941365s]
Jul 25 10:46:43.111: INFO: Created: latency-svc-kj97w
Jul 25 10:46:43.126: INFO: Got endpoints: latency-svc-kj97w [1.024347417s]
Jul 25 10:46:43.261: INFO: Created: latency-svc-2mb95
Jul 25 10:46:43.292: INFO: Created: latency-svc-5fzl5
Jul 25 10:46:43.292: INFO: Got endpoints: latency-svc-2mb95 [1.129602105s]
Jul 25 10:46:43.321: INFO: Got endpoints: latency-svc-5fzl5 [1.084244307s]
Jul 25 10:46:43.357: INFO: Created: latency-svc-h7msk
Jul 25 10:46:43.392: INFO: Got endpoints: latency-svc-h7msk [1.116362294s]
Jul 25 10:46:43.416: INFO: Created: latency-svc-zs9vg
Jul 25 10:46:43.445: INFO: Got endpoints: latency-svc-zs9vg [1.072701808s]
Jul 25 10:46:43.537: INFO: Created: latency-svc-2b2j9
Jul 25 10:46:43.568: INFO: Got endpoints: latency-svc-2b2j9 [1.116724964s]
Jul 25 10:46:43.598: INFO: Created: latency-svc-lrdlx
Jul 25 10:46:43.608: INFO: Got endpoints: latency-svc-lrdlx [1.061538766s]
Jul 25 10:46:43.740: INFO: Created: latency-svc-b6nrh
Jul 25 10:46:43.751: INFO: Got endpoints: latency-svc-b6nrh [1.155764483s]
Jul 25 10:46:43.789: INFO: Created: latency-svc-wbqff
Jul 25 10:46:43.806: INFO: Got endpoints: latency-svc-wbqff [1.096165494s]
Jul 25 10:46:43.825: INFO: Created: latency-svc-krqpg
Jul 25 10:46:43.836: INFO: Got endpoints: latency-svc-krqpg [1.076278805s]
Jul 25 10:46:43.908: INFO: Created: latency-svc-smzpp
Jul 25 10:46:43.932: INFO: Got endpoints: latency-svc-smzpp [1.053503609s]
Jul 25 10:46:43.956: INFO: Created: latency-svc-7whxn
Jul 25 10:46:43.969: INFO: Got endpoints: latency-svc-7whxn [1.050488355s]
Jul 25 10:46:43.989: INFO: Created: latency-svc-qph7c
Jul 25 10:46:44.039: INFO: Got endpoints: latency-svc-qph7c [1.056849341s]
Jul 25 10:46:44.041: INFO: Created: latency-svc-r2t82
Jul 25 10:46:44.054: INFO: Got endpoints: latency-svc-r2t82 [1.03164631s]
Jul 25 10:46:44.070: INFO: Created: latency-svc-jz9k2
Jul 25 10:46:44.083: INFO: Got endpoints: latency-svc-jz9k2 [978.828846ms]
Jul 25 10:46:44.100: INFO: Created: latency-svc-x9ptp
Jul 25 10:46:44.125: INFO: Got endpoints: latency-svc-x9ptp [998.622436ms]
Jul 25 10:46:44.171: INFO: Created: latency-svc-nwdg9
Jul 25 10:46:44.186: INFO: Got endpoints: latency-svc-nwdg9 [894.497611ms]
Jul 25 10:46:44.203: INFO: Created: latency-svc-z2ngl
Jul 25 10:46:44.228: INFO: Got endpoints: latency-svc-z2ngl [906.966679ms]
Jul 25 10:46:44.250: INFO: Created: latency-svc-zwzd9
Jul 25 10:46:44.308: INFO: Got endpoints: latency-svc-zwzd9 [916.060993ms]
Jul 25 10:46:44.335: INFO: Created: latency-svc-8dgv8
Jul 25 10:46:44.349: INFO: Got endpoints: latency-svc-8dgv8 [904.030962ms]
Jul 25 10:46:44.370: INFO: Created: latency-svc-pcdhf
Jul 25 10:46:44.385: INFO: Got endpoints: latency-svc-pcdhf [817.053035ms]
Jul 25 10:46:44.458: INFO: Created: latency-svc-n8676
Jul 25 10:46:44.486: INFO: Got endpoints: latency-svc-n8676 [877.635812ms]
Jul 25 10:46:44.486: INFO: Created: latency-svc-bp8jg
Jul 25 10:46:44.528: INFO: Got endpoints: latency-svc-bp8jg [776.190551ms]
Jul 25 10:46:44.593: INFO: Created: latency-svc-r9qzv
Jul 25 10:46:44.617: INFO: Got endpoints: latency-svc-r9qzv [811.081215ms]
Jul 25 10:46:44.653: INFO: Created: latency-svc-l4qgf
Jul 25 10:46:44.675: INFO: Got endpoints: latency-svc-l4qgf [838.789285ms]
Jul 25 10:46:44.738: INFO: Created: latency-svc-vh7jg
Jul 25 10:46:44.780: INFO: Got endpoints: latency-svc-vh7jg [848.201178ms]
Jul 25 10:46:44.813: INFO: Created: latency-svc-d7p9z
Jul 25 10:46:44.949: INFO: Got endpoints: latency-svc-d7p9z [979.809337ms]
Jul 25 10:46:45.220: INFO: Created: latency-svc-frxk4
Jul 25 10:46:45.238: INFO: Got endpoints: latency-svc-frxk4 [1.198847374s]
Jul 25 10:46:45.626: INFO: Created: latency-svc-lvbd7
Jul 25 10:46:45.633: INFO: Got endpoints: latency-svc-lvbd7 [1.578659825s]
Jul 25 10:46:45.776: INFO: Created: latency-svc-7ktw6
Jul 25 10:46:45.813: INFO: Got endpoints: latency-svc-7ktw6 [1.729657118s]
Jul 25 10:46:46.127: INFO: Created: latency-svc-9nvnd
Jul 25 10:46:46.316: INFO: Got endpoints: latency-svc-9nvnd [2.190914137s]
Jul 25 10:46:46.506: INFO: Created: latency-svc-xfcg5
Jul 25 10:46:46.569: INFO: Got endpoints: latency-svc-xfcg5 [2.382427672s]
Jul 25 10:46:46.842: INFO: Created: latency-svc-zkjxj
Jul 25 10:46:46.858: INFO: Got endpoints: latency-svc-zkjxj [2.629160345s]
Jul 25 10:46:47.203: INFO: Created: latency-svc-kwrp6
Jul 25 10:46:47.246: INFO: Got endpoints: latency-svc-kwrp6 [2.937682527s]
Jul 25 10:46:47.346: INFO: Created: latency-svc-v4fpf
Jul 25 10:46:47.372: INFO: Got endpoints: latency-svc-v4fpf [3.022723205s]
Jul 25 10:46:47.435: INFO: Created: latency-svc-w7kn7
Jul 25 10:46:47.542: INFO: Got endpoints: latency-svc-w7kn7 [3.156685115s]
Jul 25 10:46:47.543: INFO: Created: latency-svc-pgv92
Jul 25 10:46:47.571: INFO: Got endpoints: latency-svc-pgv92 [3.085233811s]
Jul 25 10:46:47.698: INFO: Created: latency-svc-ptts6
Jul 25 10:46:47.784: INFO: Got endpoints: latency-svc-ptts6 [3.256076105s]
Jul 25 10:46:47.785: INFO: Created: latency-svc-mdzgl
Jul 25 10:46:47.901: INFO: Got endpoints: latency-svc-mdzgl [3.284079755s]
Jul 25 10:46:47.941: INFO: Created: latency-svc-dw8ds
Jul 25 10:46:48.069: INFO: Got endpoints: latency-svc-dw8ds [3.393802951s]
Jul 25 10:46:48.139: INFO: Created: latency-svc-bvckm
Jul 25 10:46:48.238: INFO: Got endpoints: latency-svc-bvckm [3.457594777s]
Jul 25 10:46:48.265: INFO: Created: latency-svc-4zvgn
Jul 25 10:46:48.285: INFO: Got endpoints: latency-svc-4zvgn [3.336232405s]
Jul 25 10:46:48.524: INFO: Created: latency-svc-sd72h
Jul 25 10:46:48.991: INFO: Got endpoints: latency-svc-sd72h [3.753796882s]
Jul 25 10:46:49.327: INFO: Created: latency-svc-zpfwd
Jul 25 10:46:49.422: INFO: Got endpoints: latency-svc-zpfwd [3.789372705s]
Jul 25 10:46:49.590: INFO: Created: latency-svc-bh5wn
Jul 25 10:46:49.609: INFO: Got endpoints: latency-svc-bh5wn [3.796226512s]
Jul 25 10:46:49.658: INFO: Created: latency-svc-j7k24
Jul 25 10:46:49.682: INFO: Got endpoints: latency-svc-j7k24 [3.365841235s]
Jul 25 10:46:49.770: INFO: Created: latency-svc-2cpmz
Jul 25 10:46:49.826: INFO: Got endpoints: latency-svc-2cpmz [3.257046039s]
Jul 25 10:46:49.907: INFO: Created: latency-svc-rxxc9
Jul 25 10:46:49.911: INFO: Got endpoints: latency-svc-rxxc9 [3.053305145s]
Jul 25 10:46:50.093: INFO: Created: latency-svc-lb6b7
Jul 25 10:46:50.390: INFO: Got endpoints: latency-svc-lb6b7 [3.143980422s]
Jul 25 10:46:50.558: INFO: Created: latency-svc-277nm
Jul 25 10:46:50.569: INFO: Got endpoints: latency-svc-277nm [3.197280577s]
Jul 25 10:46:50.642: INFO: Created: latency-svc-2gpfk
Jul 25 10:46:50.725: INFO: Created: latency-svc-vddhl
Jul 25 10:46:50.738: INFO: Got endpoints: latency-svc-2gpfk [3.195885502s]
Jul 25 10:46:50.741: INFO: Got endpoints: latency-svc-vddhl [3.170268132s]
Jul 25 10:46:50.793: INFO: Created: latency-svc-bmpsl
Jul 25 10:46:51.063: INFO: Got endpoints: latency-svc-bmpsl [3.278813863s]
Jul 25 10:46:51.153: INFO: Created: latency-svc-4qg69
Jul 25 10:46:51.201: INFO: Got endpoints: latency-svc-4qg69 [3.2996211s]
Jul 25 10:46:51.218: INFO: Created: latency-svc-6mrj5
Jul 25 10:46:51.242: INFO: Got endpoints: latency-svc-6mrj5 [3.173509701s]
Jul 25 10:46:51.286: INFO: Created: latency-svc-nvh59
Jul 25 10:46:51.374: INFO: Got endpoints: latency-svc-nvh59 [3.136277693s]
Jul 25 10:46:51.386: INFO: Created: latency-svc-xw4fd
Jul 25 10:46:51.417: INFO: Got endpoints: latency-svc-xw4fd [3.131958095s]
Jul 25 10:46:51.542: INFO: Created: latency-svc-phxjk
Jul 25 10:46:51.549: INFO: Got endpoints: latency-svc-phxjk [2.557485001s]
Jul 25 10:46:51.604: INFO: Created: latency-svc-5k954
Jul 25 10:46:51.633: INFO: Got endpoints: latency-svc-5k954 [2.210636422s]
Jul 25 10:46:51.691: INFO: Created: latency-svc-wv894
Jul 25 10:46:51.753: INFO: Got endpoints: latency-svc-wv894 [2.143130385s]
Jul 25 10:46:51.913: INFO: Created: latency-svc-n26tn
Jul 25 10:46:51.963: INFO: Got endpoints: latency-svc-n26tn [2.28175403s]
Jul 25 10:46:52.064: INFO: Created: latency-svc-njdns
Jul 25 10:46:52.113: INFO: Got endpoints: latency-svc-njdns [2.287111345s]
Jul 25 10:46:52.132: INFO: Created: latency-svc-rhpx2
Jul 25 10:46:52.149: INFO: Got endpoints: latency-svc-rhpx2 [2.23852099s]
Jul 25 10:46:52.219: INFO: Created: latency-svc-9l96q
Jul 25 10:46:52.238: INFO: Got endpoints: latency-svc-9l96q [1.847705036s]
Jul 25 10:46:52.259: INFO: Created: latency-svc-v9blg
Jul 25 10:46:52.276: INFO: Got endpoints: latency-svc-v9blg [1.706659174s]
Jul 25 10:46:52.399: INFO: Created: latency-svc-t9846
Jul 25 10:46:52.403: INFO: Got endpoints: latency-svc-t9846 [1.664862565s]
Jul 25 10:46:52.561: INFO: Created: latency-svc-wbc92
Jul 25 10:46:52.626: INFO: Got endpoints: latency-svc-wbc92 [1.884351332s]
Jul 25 10:46:52.974: INFO: Created: latency-svc-vxcgw
Jul 25 10:46:53.055: INFO: Got endpoints: latency-svc-vxcgw [1.992280549s]
Jul 25 10:46:53.148: INFO: Created: latency-svc-cksrj
Jul 25 10:46:53.165: INFO: Got endpoints: latency-svc-cksrj [1.964295061s]
Jul 25 10:46:53.514: INFO: Created: latency-svc-g66b5
Jul 25 10:46:53.589: INFO: Got endpoints: latency-svc-g66b5 [2.346913842s]
Jul 25 10:46:53.710: INFO: Created: latency-svc-r5n7m
Jul 25 10:46:53.740: INFO: Got endpoints: latency-svc-r5n7m [2.365599738s]
Jul 25 10:46:53.866: INFO: Created: latency-svc-cwfb6
Jul 25 10:46:53.886: INFO: Got endpoints: latency-svc-cwfb6 [2.468543914s]
Jul 25 10:46:53.928: INFO: Created: latency-svc-kdktl
Jul 25 10:46:53.944: INFO: Got endpoints: latency-svc-kdktl [2.394975901s]
Jul 25 10:46:54.015: INFO: Created: latency-svc-trkj5
Jul 25 10:46:54.084: INFO: Got endpoints: latency-svc-trkj5 [2.450878201s]
Jul 25 10:46:54.085: INFO: Created: latency-svc-jt2rx
Jul 25 10:46:54.165: INFO: Got endpoints: latency-svc-jt2rx [2.412342399s]
Jul 25 10:46:54.314: INFO: Created: latency-svc-rphzf
Jul 25 10:46:54.386: INFO: Got endpoints: latency-svc-rphzf [2.422143062s]
Jul 25 10:46:54.529: INFO: Created: latency-svc-vh6wf
Jul 25 10:46:54.583: INFO: Got endpoints: latency-svc-vh6wf [2.469617329s]
Jul 25 10:46:54.680: INFO: Created: latency-svc-d5dv4
Jul 25 10:46:54.716: INFO: Got endpoints: latency-svc-d5dv4 [2.566623735s]
Jul 25 10:46:54.718: INFO: Created: latency-svc-6qfns
Jul 25 10:46:54.724: INFO: Got endpoints: latency-svc-6qfns [2.485893276s]
Jul 25 10:46:54.829: INFO: Created: latency-svc-v7vbg
Jul 25 10:46:54.834: INFO: Got endpoints: latency-svc-v7vbg [2.558027096s]
Jul 25 10:46:54.870: INFO: Created: latency-svc-d2xv9
Jul 25 10:46:54.894: INFO: Got endpoints: latency-svc-d2xv9 [2.491179458s]
Jul 25 10:46:54.917: INFO: Created: latency-svc-4hzpb
Jul 25 10:46:54.967: INFO: Got endpoints: latency-svc-4hzpb [2.341429855s]
Jul 25 10:46:54.995: INFO: Created: latency-svc-f2qtd
Jul 25 10:46:55.008: INFO: Got endpoints: latency-svc-f2qtd [1.953006711s]
Jul 25 10:46:55.061: INFO: Created: latency-svc-klrhg
Jul 25 10:46:55.129: INFO: Got endpoints: latency-svc-klrhg [1.963923702s]
Jul 25 10:46:55.137: INFO: Created: latency-svc-2tdk5
Jul 25 10:46:55.147: INFO: Got endpoints: latency-svc-2tdk5 [1.557362005s]
Jul 25 10:46:55.204: INFO: Created: latency-svc-9xk57
Jul 25 10:46:55.219: INFO: Got endpoints: latency-svc-9xk57 [1.47976225s]
Jul 25 10:46:55.519: INFO: Created: latency-svc-6pqw7
Jul 25 10:46:55.544: INFO: Got endpoints: latency-svc-6pqw7 [1.657970271s]
Jul 25 10:46:55.600: INFO: Created: latency-svc-8fn9x
Jul 25 10:46:55.739: INFO: Got endpoints: latency-svc-8fn9x [1.795061034s]
Jul 25 10:46:55.746: INFO: Created: latency-svc-fjgps
Jul 25 10:46:55.766: INFO: Got endpoints: latency-svc-fjgps [1.682160957s]
Jul 25 10:46:55.798: INFO: Created: latency-svc-p7vbm
Jul 25 10:46:55.814: INFO: Got endpoints: latency-svc-p7vbm [1.64875073s]
Jul 25 10:46:55.835: INFO: Created: latency-svc-8s66t
Jul 25 10:46:55.871: INFO: Got endpoints: latency-svc-8s66t [1.485177672s]
Jul 25 10:46:55.877: INFO: Created: latency-svc-mgcfb
Jul 25 10:46:55.893: INFO: Got endpoints: latency-svc-mgcfb [1.310486441s]
Jul 25 10:46:56.129: INFO: Created: latency-svc-tt2bx
Jul 25 10:46:56.157: INFO: Got endpoints: latency-svc-tt2bx [1.440607215s]
Jul 25 10:46:56.303: INFO: Created: latency-svc-xm94j
Jul 25 10:46:56.311: INFO: Got endpoints: latency-svc-xm94j [1.587280927s]
Jul 25 10:46:56.339: INFO: Created: latency-svc-26hc2
Jul 25 10:46:56.355: INFO: Got endpoints: latency-svc-26hc2 [1.521074095s]
Jul 25 10:46:56.375: INFO: Created: latency-svc-mgt67
Jul 25 10:46:56.385: INFO: Got endpoints: latency-svc-mgt67 [1.49097674s]
Jul 25 10:46:56.465: INFO: Created: latency-svc-gld5h
Jul 25 10:46:56.482: INFO: Got endpoints: latency-svc-gld5h [1.515070913s]
Jul 25 10:46:56.500: INFO: Created: latency-svc-p6lqc
Jul 25 10:46:56.516: INFO: Got endpoints: latency-svc-p6lqc [1.50812532s]
Jul 25 10:46:56.578: INFO: Created: latency-svc-664gl
Jul 25 10:46:56.598: INFO: Got endpoints: latency-svc-664gl [1.46876341s]
Jul 25 10:46:56.657: INFO: Created: latency-svc-wfnrw
Jul 25 10:46:56.728: INFO: Got endpoints: latency-svc-wfnrw [1.580719591s]
Jul 25 10:46:56.740: INFO: Created: latency-svc-m7hvk
Jul 25 10:46:56.769: INFO: Got endpoints: latency-svc-m7hvk [1.549117092s]
Jul 25 10:46:56.796: INFO: Created: latency-svc-b25b7
Jul 25 10:46:56.811: INFO: Got endpoints: latency-svc-b25b7 [1.267089754s]
Jul 25 10:46:56.902: INFO: Created: latency-svc-j5dkc
Jul 25 10:46:56.907: INFO: Got endpoints: latency-svc-j5dkc [1.167793712s]
Jul 25 10:46:56.986: INFO: Created: latency-svc-tpmb2
Jul 25 10:46:57.051: INFO: Got endpoints: latency-svc-tpmb2 [1.284367529s]
Jul 25 10:46:57.077: INFO: Created: latency-svc-8f28k
Jul 25 10:46:57.124: INFO: Got endpoints: latency-svc-8f28k [1.310000282s]
Jul 25 10:46:57.186: INFO: Created: latency-svc-75gbv
Jul 25 10:46:57.234: INFO: Got endpoints: latency-svc-75gbv [1.363144054s]
Jul 25 10:46:57.399: INFO: Created: latency-svc-nfjh9
Jul 25 10:46:57.572: INFO: Got endpoints: latency-svc-nfjh9 [1.678635117s]
Jul 25 10:46:57.573: INFO: Created: latency-svc-knpzv
Jul 25 10:46:57.581: INFO: Got endpoints: latency-svc-knpzv [1.423749593s]
Jul 25 10:46:57.649: INFO: Created: latency-svc-ctjj2
Jul 25 10:46:57.758: INFO: Got endpoints: latency-svc-ctjj2 [1.446233239s]
Jul 25 10:46:57.767: INFO: Created: latency-svc-66zwt
Jul 25 10:46:57.818: INFO: Got endpoints: latency-svc-66zwt [1.463254013s]
Jul 25 10:46:57.895: INFO: Created: latency-svc-mmt24
Jul 25 10:46:57.911: INFO: Got endpoints: latency-svc-mmt24 [1.525694101s]
Jul 25 10:46:57.941: INFO: Created: latency-svc-ff9sd
Jul 25 10:46:57.965: INFO: Got endpoints: latency-svc-ff9sd [1.483135969s]
Jul 25 10:46:58.033: INFO: Created: latency-svc-9f6x8
Jul 25 10:46:58.037: INFO: Got endpoints: latency-svc-9f6x8 [1.52029566s]
Jul 25 10:46:58.093: INFO: Created: latency-svc-lh565
Jul 25 10:46:58.105: INFO: Got endpoints: latency-svc-lh565 [1.506569796s]
Jul 25 10:46:58.127: INFO: Created: latency-svc-wccqj
Jul 25 10:46:58.195: INFO: Got endpoints: latency-svc-wccqj [1.467193819s]
Jul 25 10:46:58.208: INFO: Created: latency-svc-9n6qn
Jul 25 10:46:58.213: INFO: Got endpoints: latency-svc-9n6qn [1.444282862s]
Jul 25 10:46:58.242: INFO: Created: latency-svc-9cmsw
Jul 25 10:46:58.255: INFO: Got endpoints: latency-svc-9cmsw [1.444491796s]
Jul 25 10:46:58.273: INFO: Created: latency-svc-nqgdn
Jul 25 10:46:58.286: INFO: Got endpoints: latency-svc-nqgdn [1.378736495s]
Jul 25 10:46:58.332: INFO: Created: latency-svc-zmhbc
Jul 25 10:46:58.341: INFO: Got endpoints: latency-svc-zmhbc [1.29046164s]
Jul 25 10:46:58.361: INFO: Created: latency-svc-wpf2p
Jul 25 10:46:58.397: INFO: Got endpoints: latency-svc-wpf2p [1.273167142s]
Jul 25 10:46:58.518: INFO: Created: latency-svc-tj5f6
Jul 25 10:46:58.522: INFO: Got endpoints: latency-svc-tj5f6 [1.287925394s]
Jul 25 10:46:58.583: INFO: Created: latency-svc-sqhls
Jul 25 10:46:58.600: INFO: Got endpoints: latency-svc-sqhls [1.02786486s]
Jul 25 10:46:58.644: INFO: Created: latency-svc-k85qm
Jul 25 10:46:58.661: INFO: Got endpoints: latency-svc-k85qm [1.080044552s]
Jul 25 10:46:58.699: INFO: Created: latency-svc-9gwf2
Jul 25 10:46:58.714: INFO: Got endpoints: latency-svc-9gwf2 [956.244839ms]
Jul 25 10:46:58.742: INFO: Created: latency-svc-9dbt8
Jul 25 10:46:58.800: INFO: Got endpoints: latency-svc-9dbt8 [981.503786ms]
Jul 25 10:46:58.823: INFO: Created: latency-svc-2xgkp
Jul 25 10:46:58.840: INFO: Got endpoints: latency-svc-2xgkp [929.692648ms]
Jul 25 10:46:58.860: INFO: Created: latency-svc-ptbhm
Jul 25 10:46:58.889: INFO: Got endpoints: latency-svc-ptbhm [923.150765ms]
Jul 25 10:46:58.937: INFO: Created: latency-svc-xlz9k
Jul 25 10:46:58.949: INFO: Got endpoints: latency-svc-xlz9k [912.620922ms]
Jul 25 10:46:58.991: INFO: Created: latency-svc-lw8jq
Jul 25 10:46:59.003: INFO: Got endpoints: latency-svc-lw8jq [898.315314ms]
Jul 25 10:46:59.021: INFO: Created: latency-svc-xhnlr
Jul 25 10:46:59.033: INFO: Got endpoints: latency-svc-xhnlr [838.535785ms]
Jul 25 10:46:59.082: INFO: Created: latency-svc-hkfl5
Jul 25 10:46:59.088: INFO: Got endpoints: latency-svc-hkfl5 [874.739074ms]
Jul 25 10:46:59.107: INFO: Created: latency-svc-hvgb8
Jul 25 10:46:59.124: INFO: Got endpoints: latency-svc-hvgb8 [868.681142ms]
Jul 25 10:46:59.143: INFO: Created: latency-svc-t58cp
Jul 25 10:46:59.154: INFO: Got endpoints: latency-svc-t58cp [868.542964ms]
Jul 25 10:46:59.175: INFO: Created: latency-svc-ns76m
Jul 25 10:46:59.213: INFO: Got endpoints: latency-svc-ns76m [871.542144ms]
Jul 25 10:46:59.231: INFO: Created: latency-svc-fl86z
Jul 25 10:46:59.244: INFO: Got endpoints: latency-svc-fl86z [847.343048ms]
Jul 25 10:46:59.261: INFO: Created: latency-svc-mw62m
Jul 25 10:46:59.287: INFO: Got endpoints: latency-svc-mw62m [764.609045ms]
Jul 25 10:46:59.351: INFO: Created: latency-svc-hcm8m
Jul 25 10:46:59.355: INFO: Got endpoints: latency-svc-hcm8m [754.70935ms]
Jul 25 10:46:59.414: INFO: Created: latency-svc-sbvd6
Jul 25 10:46:59.421: INFO: Got endpoints: latency-svc-sbvd6 [760.212791ms]
Jul 25 10:46:59.448: INFO: Created: latency-svc-25xxb
Jul 25 10:46:59.476: INFO: Got endpoints: latency-svc-25xxb [761.766986ms]
Jul 25 10:46:59.503: INFO: Created: latency-svc-j6fw8
Jul 25 10:46:59.540: INFO: Got endpoints: latency-svc-j6fw8 [740.076174ms]
Jul 25 10:46:59.561: INFO: Created: latency-svc-sfkpq
Jul 25 10:46:59.602: INFO: Got endpoints: latency-svc-sfkpq [761.583499ms]
Jul 25 10:46:59.621: INFO: Created: latency-svc-pfddt
Jul 25 10:46:59.649: INFO: Got endpoints: latency-svc-pfddt [760.00381ms]
Jul 25 10:46:59.787: INFO: Created: latency-svc-fc2mg
Jul 25 10:46:59.855: INFO: Created: latency-svc-wqm7m
Jul 25 10:46:59.855: INFO: Got endpoints: latency-svc-fc2mg [905.721786ms]
Jul 25 10:46:59.925: INFO: Got endpoints: latency-svc-wqm7m [921.891245ms]
Jul 25 10:46:59.939: INFO: Created: latency-svc-85qc7
Jul 25 10:46:59.964: INFO: Got endpoints: latency-svc-85qc7 [930.763426ms]
Jul 25 10:47:00.013: INFO: Created: latency-svc-q67b4
Jul 25 10:47:00.069: INFO: Got endpoints: latency-svc-q67b4 [981.314678ms]
Jul 25 10:47:00.095: INFO: Created: latency-svc-4dfrd
Jul 25 10:47:00.113: INFO: Got endpoints: latency-svc-4dfrd [989.138004ms]
Jul 25 10:47:00.143: INFO: Created: latency-svc-7wcpc
Jul 25 10:47:00.218: INFO: Created: latency-svc-j6x2w
Jul 25 10:47:00.219: INFO: Got endpoints: latency-svc-7wcpc [1.064153214s]
Jul 25 10:47:00.226: INFO: Got endpoints: latency-svc-j6x2w [1.013495479s]
Jul 25 10:47:00.263: INFO: Created: latency-svc-rhxpt
Jul 25 10:47:00.287: INFO: Got endpoints: latency-svc-rhxpt [1.042521145s]
Jul 25 10:47:00.305: INFO: Created: latency-svc-dsghb
Jul 25 10:47:00.345: INFO: Got endpoints: latency-svc-dsghb [1.057906036s]
Jul 25 10:47:00.366: INFO: Created: latency-svc-rprkz
Jul 25 10:47:00.397: INFO: Got endpoints: latency-svc-rprkz [1.042125895s]
Jul 25 10:47:00.433: INFO: Created: latency-svc-pb7tv
Jul 25 10:47:00.470: INFO: Got endpoints: latency-svc-pb7tv [1.049361537s]
Jul 25 10:47:00.485: INFO: Created: latency-svc-42jkg
Jul 25 10:47:00.510: INFO: Got endpoints: latency-svc-42jkg [1.034617163s]
Jul 25 10:47:00.539: INFO: Created: latency-svc-jh4bh
Jul 25 10:47:00.552: INFO: Got endpoints: latency-svc-jh4bh [1.012257876s]
Jul 25 10:47:00.569: INFO: Created: latency-svc-qz5t4
Jul 25 10:47:00.602: INFO: Got endpoints: latency-svc-qz5t4 [1.000178689s]
Jul 25 10:47:00.612: INFO: Created: latency-svc-hmmsx
Jul 25 10:47:00.631: INFO: Got endpoints: latency-svc-hmmsx [981.914548ms]
Jul 25 10:47:00.679: INFO: Created: latency-svc-99mts
Jul 25 10:47:00.691: INFO: Got endpoints: latency-svc-99mts [835.333502ms]
Jul 25 10:47:00.740: INFO: Created: latency-svc-q5hh6
Jul 25 10:47:00.751: INFO: Got endpoints: latency-svc-q5hh6 [826.340459ms]
Jul 25 10:47:00.785: INFO: Created: latency-svc-7qxqq
Jul 25 10:47:00.799: INFO: Got endpoints: latency-svc-7qxqq [834.998733ms]
Jul 25 10:47:00.799: INFO: Latencies: [254.047133ms 628.78731ms 740.076174ms 754.70935ms 760.00381ms 760.212791ms 761.583499ms 761.766986ms 764.609045ms 776.190551ms 811.081215ms 817.053035ms 826.340459ms 834.998733ms 835.333502ms 838.535785ms 838.789285ms 847.343048ms 848.201178ms 868.542964ms 868.681142ms 871.542144ms 874.739074ms 877.635812ms 894.497611ms 898.315314ms 904.030962ms 905.721786ms 906.966679ms 912.620922ms 916.060993ms 921.891245ms 923.150765ms 929.692648ms 930.763426ms 945.252813ms 954.084452ms 956.244839ms 978.828846ms 979.809337ms 981.314678ms 981.503786ms 981.914548ms 989.138004ms 998.622436ms 1.000178689s 1.008349595s 1.012257876s 1.013495479s 1.013532895s 1.017794475s 1.023383262s 1.024347417s 1.02550644s 1.02786486s 1.03164631s 1.034617163s 1.034941365s 1.042125895s 1.042521145s 1.049361537s 1.050488355s 1.053503609s 1.056849341s 1.057187653s 1.057906036s 1.061538766s 1.064153214s 1.070565046s 1.072701808s 1.076278805s 1.080044552s 1.084244307s 1.089980619s 1.094209807s 1.096165494s 1.098577376s 1.116362294s 1.116724964s 1.129602105s 1.155764483s 1.167793712s 1.198847374s 1.22247858s 1.248030284s 1.267089754s 1.273167142s 1.284367529s 1.287925394s 1.29046164s 1.310000282s 1.310486441s 1.313926403s 1.350513408s 1.363144054s 1.378736495s 1.423749593s 1.440607215s 1.444282862s 1.444491796s 1.446233239s 1.458239843s 1.463254013s 1.467193819s 1.46876341s 1.472216814s 1.47976225s 1.483135969s 1.485177672s 1.49097674s 1.506569796s 1.50812532s 1.515070913s 1.52029566s 1.521074095s 1.524520532s 1.525694101s 1.549117092s 1.551474466s 1.557362005s 1.570872683s 1.578659825s 1.580719591s 1.587280927s 1.60113397s 1.621661789s 1.64875073s 1.657970271s 1.664862565s 1.678635117s 1.682160957s 1.706659174s 1.726055301s 1.729657118s 1.789920916s 1.795061034s 1.805001989s 1.818320986s 1.825365118s 1.847705036s 1.869221015s 1.884351332s 1.89151319s 1.953006711s 1.96354743s 1.963923702s 1.964295061s 1.966165074s 1.974246427s 1.992280549s 2.095563424s 2.125096717s 2.143130385s 2.190914137s 2.210636422s 2.23852099s 2.28175403s 2.287111345s 2.341429855s 2.346913842s 2.365599738s 2.382427672s 2.394975901s 2.412342399s 2.422143062s 2.450878201s 2.468543914s 2.469617329s 2.485893276s 2.491179458s 2.551509398s 2.557485001s 2.558027096s 2.566623735s 2.629160345s 2.635255791s 2.937682527s 3.022723205s 3.053305145s 3.085233811s 3.131958095s 3.136277693s 3.143980422s 3.156685115s 3.170268132s 3.173509701s 3.195885502s 3.197280577s 3.256076105s 3.257046039s 3.278813863s 3.284079755s 3.2996211s 3.336232405s 3.365841235s 3.393802951s 3.457594777s 3.753796882s 3.789372705s 3.796226512s]
Jul 25 10:47:00.800: INFO: 50 %ile: 1.446233239s
Jul 25 10:47:00.800: INFO: 90 %ile: 3.131958095s
Jul 25 10:47:00.800: INFO: 99 %ile: 3.789372705s
Jul 25 10:47:00.800: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:00.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9411" for this suite.

• [SLOW TEST:26.454 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":69,"skipped":1491,"failed":0}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:00.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:00.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6289" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":70,"skipped":1492,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:00.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 25 10:47:00.960: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:11.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5846" for this suite.

• [SLOW TEST:10.263 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":71,"skipped":1498,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:11.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:47:11.319: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/: 
alternatives.log
containers/

 (the same two-entry listing — alternatives.log, containers/ — was returned for each of the repeated proxy-subresource requests; the rest of this test's output, its PASSED record, and the opening lines of the following [sig-storage] Projected secret test are missing from the capture, which resumes mid-line below)
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-6143c8ef-fa5f-4ccd-8f33-56bfd8741515
STEP: Creating a pod to test consume secrets
Jul 25 10:47:11.916: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5" in namespace "projected-2503" to be "Succeeded or Failed"
Jul 25 10:47:11.948: INFO: Pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.96969ms
Jul 25 10:47:14.052: INFO: Pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136041941s
Jul 25 10:47:16.388: INFO: Pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47231265s
Jul 25 10:47:18.575: INFO: Pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.658984947s
STEP: Saw pod success
Jul 25 10:47:18.575: INFO: Pod "pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5" satisfied condition "Succeeded or Failed"
Jul 25 10:47:18.593: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5 container projected-secret-volume-test: 
STEP: delete the pod
Jul 25 10:47:18.838: INFO: Waiting for pod pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5 to disappear
Jul 25 10:47:18.863: INFO: Pod pod-projected-secrets-9118bf29-1330-4577-b744-44be8649d8e5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:18.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2503" for this suite.

• [SLOW TEST:7.205 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:18.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Jul 25 10:47:19.001: INFO: Waiting up to 5m0s for pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5" in namespace "var-expansion-4784" to be "Succeeded or Failed"
Jul 25 10:47:19.037: INFO: Pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.579941ms
Jul 25 10:47:21.710: INFO: Pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709317529s
Jul 25 10:47:23.713: INFO: Pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5": Phase="Running", Reason="", readiness=true. Elapsed: 4.712373126s
Jul 25 10:47:25.717: INFO: Pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.716210307s
STEP: Saw pod success
Jul 25 10:47:25.717: INFO: Pod "var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5" satisfied condition "Succeeded or Failed"
Jul 25 10:47:25.752: INFO: Trying to get logs from node kali-worker2 pod var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5 container dapi-container: 
STEP: delete the pod
Jul 25 10:47:25.828: INFO: Waiting for pod var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5 to disappear
Jul 25 10:47:25.843: INFO: Pod var-expansion-dde7d052-de89-4b63-b042-10639a2c7bf5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:25.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4784" for this suite.

• [SLOW TEST:7.026 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1539,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:25.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 10:47:26.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9" in namespace "projected-851" to be "Succeeded or Failed"
Jul 25 10:47:26.209: INFO: Pod "downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 38.270018ms
Jul 25 10:47:28.286: INFO: Pod "downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115027877s
Jul 25 10:47:30.290: INFO: Pod "downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119195054s
STEP: Saw pod success
Jul 25 10:47:30.290: INFO: Pod "downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9" satisfied condition "Succeeded or Failed"
Jul 25 10:47:30.609: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9 container client-container: 
STEP: delete the pod
Jul 25 10:47:31.374: INFO: Waiting for pod downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9 to disappear
Jul 25 10:47:31.519: INFO: Pod downwardapi-volume-6b34f94a-725b-4e7b-a5df-825ab5201ed9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:31.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-851" for this suite.

• [SLOW TEST:5.973 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1547,"failed":0}
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:31.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0725 10:47:44.816126       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 10:47:44.816: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:47:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3995" for this suite.

• [SLOW TEST:12.951 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":76,"skipped":1547,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:47:44.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:47:45.323: INFO: Creating deployment "webserver-deployment"
Jul 25 10:47:45.385: INFO: Waiting for observed generation 1
Jul 25 10:47:47.747: INFO: Waiting for all required pods to come up
Jul 25 10:47:47.802: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 25 10:48:00.386: INFO: Waiting for deployment "webserver-deployment" to complete
Jul 25 10:48:00.480: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul 25 10:48:00.596: INFO: Updating deployment webserver-deployment
Jul 25 10:48:00.596: INFO: Waiting for observed generation 2
Jul 25 10:48:02.920: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 25 10:48:02.990: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 25 10:48:03.184: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 25 10:48:03.776: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 25 10:48:03.776: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 25 10:48:03.938: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 25 10:48:04.207: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul 25 10:48:04.207: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul 25 10:48:04.447: INFO: Updating deployment webserver-deployment
Jul 25 10:48:04.448: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul 25 10:48:04.988: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 25 10:48:08.276: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 25 10:48:09.813: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2042 /apis/apps/v1/namespaces/deployment-2042/deployments/webserver-deployment 3bdeced3-28ef-4e75-9138-c3bdde175506 4022470 3 2020-07-25 10:47:45 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00478af28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-25 10:48:04 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-07-25 10:48:05 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul 25 10:48:10.418: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-2042 /apis/apps/v1/namespaces/deployment-2042/replicasets/webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 4022467 3 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3bdeced3-28ef-4e75-9138-c3bdde175506 0xc00478b5a7 0xc00478b5a8}] []  [{kube-controller-manager Update apps/v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 100 101 99 101 100 51 45 50 56 101 102 45 52 101 55 53 45 57 49 51 56 45 99 51 98 100 100 101 49 55 53 53 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 
125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00478b628  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 10:48:10.419: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul 25 10:48:10.419: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-2042 /apis/apps/v1/namespaces/deployment-2042/replicasets/webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 4022458 3 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3bdeced3-28ef-4e75-9138-c3bdde175506 0xc00478b687 0xc00478b688}] []  [{kube-controller-manager Update apps/v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 98 100 101 99 101 100 51 45 50 56 101 102 45 52 101 55 53 45 57 49 51 56 45 99 51 98 100 100 101 49 55 53 53 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 
108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00478b718  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
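Reading the two ReplicaSet dumps above: the new ReplicaSet (revision 2, image webserver:404) reports Replicas:13 with ReadyReplicas:0, while the old one (revision 1, image docker.io/library/httpd:2.4.38-alpine) reports Replicas:20 with ReadyReplicas:8. Since 13 + 20 = 33 matches the deployment.kubernetes.io/max-replicas:33 annotation, the rollout is saturated at its surge ceiling while deployment.kubernetes.io/desired-replicas stays at 30. The FieldsV1{Raw:*[...]} blocks are simply the managedFields JSON rendered as a decimal byte slice by the Go value printer. The following is a minimal, self-contained sketch of how such a slice decodes; the slice below copies only the first few bytes from the dump and then closes the JSON so the program runs, so it is illustrative rather than part of the test output.

// decode_fields.go: hypothetical helper, not part of the e2e suite.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// First bytes of the Raw slice above: 123 34 102 58 109 101 116 97 100 97 116 97 ...
	// which is ASCII for `{"f:metadata`. The trailing 34 58 123 125 125 here only
	// closes the JSON so this standalone example stays valid.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125}

	var buf bytes.Buffer
	if err := json.Indent(&buf, raw, "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	// Prints the decoded, indented JSON: {"f:metadata": {}}
	fmt.Println(buf.String())
}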
Jul 25 10:48:10.659: INFO: Pod "webserver-deployment-6676bcd6d4-5spt8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5spt8 webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-5spt8 80fc5601-b9ca-4f89-8414-a9739785f465 4022522 0 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc00478bd27 0xc00478bd28}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.45,StartTime:2020-07-25 10:48:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
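The dump above also shows why none of the revision-2 pods become ready: the httpd container is stuck in ContainerStateWaiting with Reason:ErrImagePull, because docker.io/library/webserver:404 cannot be resolved (pull access denied, repository does not exist). Below is a minimal client-go sketch for reproducing that view outside the framework; it assumes a kubeconfig at $HOME/.kube/config, as in this run, plus the namespace deployment-2042 and the name=httpd pod label from the dumps, and it is not part of the e2e suite.

// list_waiting.go: hypothetical diagnostic, not part of the e2e suite.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Same kind of kubeconfig the test run loads.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The webserver-deployment pods all carry the label name=httpd.
	pods, err := client.CoreV1().Pods("deployment-2042").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// e.g. "webserver-deployment-6676bcd6d4-5spt8/httpd: ErrImagePull (webserver:404)"
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, cs.Name, cs.State.Waiting.Reason, cs.Image)
			}
		}
	}
}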
Jul 25 10:48:10.659: INFO: Pod "webserver-deployment-6676bcd6d4-6j5z9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6j5z9 webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-6j5z9 304a48a8-3a0f-40c2-b0fd-9b947f5626d7 4022461 0 2020-07-25 10:48:05 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc00478bf07 0xc00478bf08}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.660: INFO: Pod "webserver-deployment-6676bcd6d4-d75qn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-d75qn webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-d75qn ab616776-2e35-4bb7-a7ea-567b41892507 4022367 0 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002440047 0xc002440048}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.660: INFO: Pod "webserver-deployment-6676bcd6d4-djsdx" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-djsdx webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-djsdx 08eb4434-068b-4d95-adc5-f0e8e3e4fc5c 4022346 0 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002440397 0xc002440398}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},R
untimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.660: INFO: Pod "webserver-deployment-6676bcd6d4-dvvxz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dvvxz webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-dvvxz 3ee3c48b-03ae-47ed-a623-94b9452a2f68 4022450 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002440607 0xc002440608}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
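For contrast, webserver-deployment-6676bcd6d4-dvvxz above has no ContainerStatuses yet, only PodScheduled=True, so it is counted as not available for a different reason than the ErrImagePull pods. Below is a rough, self-contained sketch of that availability check (an assumption about what the check reduces to when MinReadySeconds is 0, not the framework's exact helper).

// pod_available.go: illustrative only; the real e2e helper may differ.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable: with MinReadySeconds set to 0, a pod counts as available
// once its Ready condition is True.
func isPodAvailable(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Shaped like webserver-deployment-6676bcd6d4-dvvxz above: scheduled, nothing else yet.
	pending := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodAvailable(pending)) // false
}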
Jul 25 10:48:10.661: INFO: Pod "webserver-deployment-6676bcd6d4-hrltx" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hrltx webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-hrltx 35ef835c-3279-406d-bce0-1d0703b60d53 4022475 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc0024408a7 0xc0024408a8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},R
untimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.661: INFO: Pod "webserver-deployment-6676bcd6d4-jxzj9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jxzj9 webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-jxzj9 fb84610c-cf65-47bb-b5d0-b5db60c927df 4022371 0 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002440c07 0xc002440c08}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},R
untimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
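Note on reading the dump above: the FieldsV1 Raw values are printed by the Go struct formatter as space-separated decimal byte values, and joining those bytes gives back the managedFields JSON (for example, the leading bytes 123 34 102 58 109 101 116 97 100 97 116 97 decode to the string {"f:metadata). The following is a minimal standalone sketch for decoding such a sequence, added here only for readability; it is not part of the e2e framework.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRawBytes turns a space-separated list of decimal byte values, as shown in
// the FieldsV1{Raw:*[...]} sections of the pod dumps, back into the JSON string
// they encode.
func decodeRawBytes(s string) (string, error) {
	fields := strings.Fields(s)
	buf := make([]byte, 0, len(fields))
	for _, tok := range fields {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", err
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// The opening bytes of the kube-controller-manager managedFields entry above.
	decoded, err := decodeRawBytes("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123")
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // prints: {"f:metadata":{
}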
Jul 25 10:48:10.661: INFO: Pod "webserver-deployment-6676bcd6d4-mkfnz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mkfnz webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-mkfnz fbbd3f36-6f8c-45b1-997a-e8d861c33e13 4022531 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002440ec7 0xc002440ec8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.662: INFO: Pod "webserver-deployment-6676bcd6d4-mxq26" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mxq26 webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-mxq26 b974a824-0a87-4425-9612-59ebfe1e5ab6 4022490 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002441177 0xc002441178}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.662: INFO: Pod "webserver-deployment-6676bcd6d4-pd92x" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pd92x webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-pd92x fa27e277-837e-46a4-ad33-de80fa6340eb 4022452 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc0024417e7 0xc0024417e8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.662: INFO: Pod "webserver-deployment-6676bcd6d4-pv8nz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pv8nz webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-pv8nz da6627a5-4659-4666-856c-7a84da6f9918 4022449 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002441af7 0xc002441af8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.662: INFO: Pod "webserver-deployment-6676bcd6d4-s67gc" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s67gc webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-s67gc 2b1f029b-1ee5-4ebd-ad14-2f96d46e9783 4022506 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc002441e47 0xc002441e48}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.663: INFO: Pod "webserver-deployment-6676bcd6d4-s6vx9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s6vx9 webserver-deployment-6676bcd6d4- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-6676bcd6d4-s6vx9 569f3efe-9919-4411-9e7d-343c507e67a3 4022356 0 2020-07-25 10:48:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 270e4ced-dbd5-4dae-8b50-393dfb279969 0xc003070207 0xc003070208}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 48 101 52 99 101 100 45 100 98 100 53 45 52 100 97 101 45 56 98 53 48 45 51 57 51 100 102 98 50 55 57 57 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},
RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
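For context on the "is not available" verdicts: each webserver-deployment-6676bcd6d4 pod dumped above references the image webserver:404, and at the time of the dump its httpd container was still Waiting (Reason: ContainerCreating) with a Ready condition of False (Reason: ContainersNotReady). The sketch below, which is an assumption and not the framework's actual helper, shows how such a verdict can be read straight off the dumped Ready condition; the real deployment availability logic also honours minReadySeconds, which is omitted here, and it assumes the k8s.io/api module is available.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's Ready condition is True, mirroring the
// PodCondition blocks printed in the dumps above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod shaped like the dumps above: Pending, container not ready.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(podReady(pod)) // false, i.e. the pod counts as "not available"
}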
Jul 25 10:48:10.663: INFO: Pod "webserver-deployment-84855cf797-5xf5z" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-5xf5z webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-5xf5z 06193bde-f547-4516-937c-35d3e221c342 4022454 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0030703d7 0xc0030703d8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.663: INFO: Pod "webserver-deployment-84855cf797-79whf" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-79whf webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-79whf 461cf621-1d8f-4495-b2cb-8effad96a46e 4022204 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003070627 0xc003070628}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.184,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://734c65fcc84440ffee7b7b2f39900d613a4034e1c9f0bcc5a0259672bdbd45cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.663: INFO: Pod "webserver-deployment-84855cf797-ckmj9" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ckmj9 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-ckmj9 00d6e27c-1cba-4faa-8ef8-348f050c7996 4022488 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003070a67 0xc003070a68}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.664: INFO: Pod "webserver-deployment-84855cf797-cndj2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cndj2 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-cndj2 d6e66f3f-fd84-47a5-831a-0802a4f14b45 4022194 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003071027 0xc003071028}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.182,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc8f4eed23507a158c0196bd57d8ac77072b80012456a2dedbf65796b75b3d89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.664: INFO: Pod "webserver-deployment-84855cf797-dhmgm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dhmgm webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-dhmgm 35878970-2bab-4b6f-a059-77bf7e780452 4022519 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0030714d7 0xc0030714d8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.664: INFO: Pod "webserver-deployment-84855cf797-g45fn" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-g45fn webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-g45fn 38e65730-b7d8-405e-85a5-4c0d0697ef69 4022217 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003071927 0xc003071928}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.183,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e29a82154972679b3e5bd498451e1b3e2d7a5a9892255de5bfbbdcf892b16ace,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.664: INFO: Pod "webserver-deployment-84855cf797-gfjmc" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gfjmc webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-gfjmc 4976b77b-1519-4469-9b9a-6683903b0d90 4022456 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003071c07 0xc003071c08}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.665: INFO: Pod "webserver-deployment-84855cf797-jpjsb" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jpjsb webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-jpjsb ddd76a47-bd75-4fee-912b-c95a2b468441 4022219 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc003071dc7 0xc003071dc8}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.43,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b87a7401302711022eceffe356531ab798b39a4295466c06713a30b8c1cc61f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
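The FieldsV1{Raw:*[...]} sections in these pod dumps are the managedFields JSON printed as decimal ASCII byte values (for example, 123 34 102 58 decodes to {"f:). A minimal Go sketch, not part of the test framework, for turning such a run of numbers back into readable JSON, assuming the digits have been copied out of the log as a whitespace-separated string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRawBytes converts a space-separated list of decimal byte values,
// as printed in the FieldsV1{Raw:*[...]} sections above, back into the
// JSON string they encode.
func decodeRawBytes(dump string) (string, error) {
	fields := strings.Fields(dump)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil || n < 0 || n > 255 {
			return "", fmt.Errorf("not a byte value: %q", f)
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// First few bytes of the kube-controller-manager managedFields entry above.
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
	s, err := decodeRawBytes(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"f:metadata":{
}
```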
Jul 25 10:48:10.665: INFO: Pod "webserver-deployment-84855cf797-jqs99" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jqs99 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-jqs99 beb252e3-a817-41e1-955a-e9b7b7c4e77a 4022468 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002656147 0xc002656148}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGat
es:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
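The pod above is reported "not available" because its Ready and ContainersReady conditions are False with reason ContainersNotReady while the httpd container is still in ContainerCreating. A minimal sketch of that readiness check, using the k8s.io/api/core/v1 types that appear in the dump; the helper name is illustrative, not the framework's own:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's Ready condition is True; in the dump
// above it is False, so the deployment test counts the pod as not yet available.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod shaped like the dump above: Pending, Ready=False, ContainersNotReady.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	fmt.Println(podIsReady(pod)) // false
}
```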
Jul 25 10:48:10.665: INFO: Pod "webserver-deployment-84855cf797-kcx4r" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kcx4r webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-kcx4r db8a9fcb-0f72-4529-afbe-00a3dc52b011 4022460 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002656307 0xc002656308}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.665: INFO: Pod "webserver-deployment-84855cf797-lzng4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lzng4 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-lzng4 8b61f85c-c59e-49de-ba71-9a8afaa2948d 4022479 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002656617 0xc002656618}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGat
es:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.666: INFO: Pod "webserver-deployment-84855cf797-mswwm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mswwm webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-mswwm 7e44ab0c-a73c-42b7-8249-a9f970ee6520 4022455 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0026567c7 0xc0026567c8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
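All of the pods dumped here carry the labels name=httpd and pod-template-hash=84855cf797 in namespace deployment-2042, so they can be listed directly with a label selector. A sketch of doing that with client-go, assuming the kubeconfig path used by this run and a reachable cluster:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig location reported at the start of this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the pods belonging to the 84855cf797 ReplicaSet dumped above.
	pods, err := client.CoreV1().Pods("deployment-2042").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=84855cf797",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}
```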
Jul 25 10:48:10.666: INFO: Pod "webserver-deployment-84855cf797-qdv4m" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qdv4m webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-qdv4m 5bf3ef97-f29a-4acc-a9a6-755446fb6c3c 4022499 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0026568f7 0xc0026568f8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
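For the Pending pods, the per-container detail lives in status.containerStatuses, where the httpd container sits in ContainerStateWaiting with reason ContainerCreating. A small sketch that collects those waiting reasons from a pod object; the helper is illustrative, not part of the e2e framework:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons returns the waiting reason for every container that has not
// started, mirroring the ContainerStateWaiting entries in the dumps above.
func waitingReasons(pod *corev1.Pod) map[string]string {
	reasons := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return reasons
}

func main() {
	// A container status shaped like the dump above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{
				{
					Name:  "httpd",
					State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
				},
			},
		},
	}
	fmt.Println(waitingReasons(pod)) // map[httpd:ContainerCreating]
}
```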
Jul 25 10:48:10.666: INFO: Pod "webserver-deployment-84855cf797-qvnrv" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qvnrv webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-qvnrv 56e72647-a642-474a-8d5b-8a3860199fe2 4022142 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002656a87 0xc002656a88}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 56 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.181,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://536e90a694d02d0177ab45c99a4e1649bd43b381a4fd30fcafcacf16edde7ab3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.666: INFO: Pod "webserver-deployment-84855cf797-v5ft5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-v5ft5 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-v5ft5 c58e7d45-db48-4f79-a783-e304b7c2f1d3 4022457 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002656e97 0xc002656e98}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.667: INFO: Pod "webserver-deployment-84855cf797-vfwbb" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vfwbb webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-vfwbb d0459221-8bd4-4fe1-b096-bedd0f6256d0 4022181 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002657097 0xc002657098}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.41,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c391a29343d71a710209d2f3b8b6f1069c14038d3cf918819469762ac80e2707,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.667: INFO: Pod "webserver-deployment-84855cf797-vrmm4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vrmm4 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-vrmm4 6bd66922-5018-4612-b565-26c04b681dc2 4022453 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0026575e7 0xc0026575e8}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.668: INFO: Pod "webserver-deployment-84855cf797-vwph8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vwph8 webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-vwph8 5a0c8e90-d0e0-4435-9c18-cc9088fcd138 4022527 0 2020-07-25 10:48:04 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002657c97 0xc002657c98}] []  [{kube-controller-manager Update v1 2020-07-25 10:48:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:48:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 10:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.668: INFO: Pod "webserver-deployment-84855cf797-vxqwl" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vxqwl webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-vxqwl a89a1715-4dbb-4dc8-8a66-ab4a9ebfc757 4022233 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc002657e37 0xc002657e38}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.44,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://629f328e33bc12d68bf0ad078f0dce8a6fca207d5a62e760b88d770ff5f216e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 25 10:48:10.668: INFO: Pod "webserver-deployment-84855cf797-z8xbw" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-z8xbw webserver-deployment-84855cf797- deployment-2042 /api/v1/namespaces/deployment-2042/pods/webserver-deployment-84855cf797-z8xbw 3e103eb7-2994-4a5b-b2c7-9312cb624cc1 4022208 0 2020-07-25 10:47:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a9f1ddf3-f0af-4983-aa1b-5dacc3531395 0xc0024f8037 0xc0024f8038}] []  [{kube-controller-manager Update v1 2020-07-25 10:47:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 57 102 49 100 100 102 51 45 102 48 97 102 45 52 57 56 51 45 97 97 49 98 45 53 100 97 99 99 51 53 51 49 51 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 10:47:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrngj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrngj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 10:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.42,StartTime:2020-07-25 10:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 10:47:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6473e85227c13921ef098b86da75354eac8f8c1bdd4c81e8ac76c61754bffc83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:48:10.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2042" for this suite.

• [SLOW TEST:26.567 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":77,"skipped":1549,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:48:11.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Jul 25 10:48:27.967: INFO: Pod pod-hostip-9430eba9-a35e-4631-a938-0ea150c0a1d6 has hostIP: 172.18.0.13
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:48:27.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9458" for this suite.

• [SLOW TEST:16.684 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1566,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:48:28.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:48:29.144: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:48:32.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270909, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270909, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270909, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731270909, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:48:35.426: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul 25 10:48:42.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-9235 to-be-attached-pod -i -c=container1'
Jul 25 10:48:42.352: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:48:42.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9235" for this suite.
STEP: Destroying namespace "webhook-9235-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.463 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":79,"skipped":1575,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:48:44.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:48:52.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5211" for this suite.

• [SLOW TEST:8.653 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":80,"skipped":1575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:48:53.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:48:54.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version'
Jul 25 10:48:55.003: INFO: stderr: ""
Jul 25 10:48:55.003: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:53:46Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:48:55.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6563" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":81,"skipped":1612,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:48:55.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-wkm6
STEP: Creating a pod to test atomic-volume-subpath
Jul 25 10:48:57.156: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wkm6" in namespace "subpath-5460" to be "Succeeded or Failed"
Jul 25 10:48:57.509: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 352.369178ms
Jul 25 10:48:59.765: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608842734s
Jul 25 10:49:01.931: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.77432987s
Jul 25 10:49:04.167: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.010137466s
Jul 25 10:49:06.179: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 9.02256073s
Jul 25 10:49:08.183: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 11.026303313s
Jul 25 10:49:10.203: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 13.046289789s
Jul 25 10:49:12.232: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 15.075195756s
Jul 25 10:49:14.235: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 17.078618662s
Jul 25 10:49:16.410: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 19.253310243s
Jul 25 10:49:18.414: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 21.257897652s
Jul 25 10:49:20.419: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 23.262820717s
Jul 25 10:49:22.424: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Running", Reason="", readiness=true. Elapsed: 25.266998574s
Jul 25 10:49:24.467: INFO: Pod "pod-subpath-test-downwardapi-wkm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.310773171s
STEP: Saw pod success
Jul 25 10:49:24.467: INFO: Pod "pod-subpath-test-downwardapi-wkm6" satisfied condition "Succeeded or Failed"
Jul 25 10:49:24.471: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-wkm6 container test-container-subpath-downwardapi-wkm6: 
STEP: delete the pod
Jul 25 10:49:24.594: INFO: Waiting for pod pod-subpath-test-downwardapi-wkm6 to disappear
Jul 25 10:49:24.927: INFO: Pod pod-subpath-test-downwardapi-wkm6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-wkm6
Jul 25 10:49:24.927: INFO: Deleting pod "pod-subpath-test-downwardapi-wkm6" in namespace "subpath-5460"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:49:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5460" for this suite.

• [SLOW TEST:29.118 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":82,"skipped":1621,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:49:24.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Jul 25 10:49:25.271: INFO: Waiting up to 5m0s for pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad" in namespace "var-expansion-3407" to be "Succeeded or Failed"
Jul 25 10:49:25.424: INFO: Pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad": Phase="Pending", Reason="", readiness=false. Elapsed: 152.697659ms
Jul 25 10:49:27.646: INFO: Pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374619929s
Jul 25 10:49:29.650: INFO: Pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.378883745s
Jul 25 10:49:31.655: INFO: Pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383802538s
STEP: Saw pod success
Jul 25 10:49:31.655: INFO: Pod "var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad" satisfied condition "Succeeded or Failed"
Jul 25 10:49:31.658: INFO: Trying to get logs from node kali-worker pod var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad container dapi-container: 
STEP: delete the pod
Jul 25 10:49:31.755: INFO: Waiting for pod var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad to disappear
Jul 25 10:49:31.775: INFO: Pod var-expansion-38c352e7-1fe3-4726-9bf5-24274f7e62ad no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:49:31.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3407" for this suite.

• [SLOW TEST:6.871 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1633,"failed":0}
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:49:31.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 25 10:49:31.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3095'
Jul 25 10:49:31.994: INFO: stderr: ""
Jul 25 10:49:31.994: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Jul 25 10:49:32.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3095'
Jul 25 10:49:43.356: INFO: stderr: ""
Jul 25 10:49:43.356: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:49:43.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3095" for this suite.

• [SLOW TEST:11.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":84,"skipped":1633,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:49:43.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:50:43.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7907" for this suite.

• [SLOW TEST:60.202 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1667,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:50:43.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:50:51.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7859" for this suite.
STEP: Destroying namespace "nsdeletetest-2844" for this suite.
Jul 25 10:50:51.328: INFO: Namespace nsdeletetest-2844 was already deleted
STEP: Destroying namespace "nsdeletetest-3849" for this suite.

• [SLOW TEST:7.716 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":86,"skipped":1676,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:50:51.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:50:52.692: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:50:55.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:50:57.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271052, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:51:00.117: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:51:00.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:01.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2116" for this suite.
STEP: Destroying namespace "webhook-2116-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.267 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":87,"skipped":1688,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:01.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:01.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-793" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":88,"skipped":1692,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:01.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 10:51:02.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9" in namespace "downward-api-6021" to be "Succeeded or Failed"
Jul 25 10:51:02.425: INFO: Pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9": Phase="Pending", Reason="", readiness=false. Elapsed: 99.048213ms
Jul 25 10:51:04.429: INFO: Pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103224881s
Jul 25 10:51:06.449: INFO: Pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9": Phase="Running", Reason="", readiness=true. Elapsed: 4.122870425s
Jul 25 10:51:08.453: INFO: Pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127296867s
STEP: Saw pod success
Jul 25 10:51:08.453: INFO: Pod "downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9" satisfied condition "Succeeded or Failed"
Jul 25 10:51:08.456: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9 container client-container: 
STEP: delete the pod
Jul 25 10:51:08.539: INFO: Waiting for pod downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9 to disappear
Jul 25 10:51:08.547: INFO: Pod downwardapi-volume-aea787f5-4054-4c3d-ba30-151820ee17a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6021" for this suite.

• [SLOW TEST:6.597 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1696,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:08.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Jul 25 10:51:08.678: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-428" to be "Succeeded or Failed"
Jul 25 10:51:08.703: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.144943ms
Jul 25 10:51:10.706: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028482212s
Jul 25 10:51:12.711: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032876103s
Jul 25 10:51:14.743: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06507341s
STEP: Saw pod success
Jul 25 10:51:14.743: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 25 10:51:14.746: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 25 10:51:15.109: INFO: Waiting for pod pod-host-path-test to disappear
Jul 25 10:51:15.174: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:15.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-428" for this suite.

• [SLOW TEST:6.615 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1727,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:15.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-53a061d4-ed68-44d3-82da-0f00e3ac6e53
STEP: Creating a pod to test consume secrets
Jul 25 10:51:15.324: INFO: Waiting up to 5m0s for pod "pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517" in namespace "secrets-3061" to be "Succeeded or Failed"
Jul 25 10:51:15.329: INFO: Pod "pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280077ms
Jul 25 10:51:17.333: INFO: Pod "pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008310472s
Jul 25 10:51:19.485: INFO: Pod "pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160675952s
STEP: Saw pod success
Jul 25 10:51:19.485: INFO: Pod "pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517" satisfied condition "Succeeded or Failed"
Jul 25 10:51:19.488: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517 container secret-volume-test: 
STEP: delete the pod
Jul 25 10:51:19.529: INFO: Waiting for pod pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517 to disappear
Jul 25 10:51:19.545: INFO: Pod pod-secrets-56b77743-4120-4d7b-ab40-08b320c76517 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:19.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3061" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1759,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:19.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 25 10:51:19.863: INFO: Waiting up to 5m0s for pod "pod-88aa380e-6606-45d4-9093-da84d2706619" in namespace "emptydir-1428" to be "Succeeded or Failed"
Jul 25 10:51:19.915: INFO: Pod "pod-88aa380e-6606-45d4-9093-da84d2706619": Phase="Pending", Reason="", readiness=false. Elapsed: 52.152363ms
Jul 25 10:51:21.920: INFO: Pod "pod-88aa380e-6606-45d4-9093-da84d2706619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056807312s
Jul 25 10:51:23.924: INFO: Pod "pod-88aa380e-6606-45d4-9093-da84d2706619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061332587s
STEP: Saw pod success
Jul 25 10:51:23.924: INFO: Pod "pod-88aa380e-6606-45d4-9093-da84d2706619" satisfied condition "Succeeded or Failed"
Jul 25 10:51:23.927: INFO: Trying to get logs from node kali-worker pod pod-88aa380e-6606-45d4-9093-da84d2706619 container test-container: 
STEP: delete the pod
Jul 25 10:51:24.013: INFO: Waiting for pod pod-88aa380e-6606-45d4-9093-da84d2706619 to disappear
Jul 25 10:51:24.021: INFO: Pod pod-88aa380e-6606-45d4-9093-da84d2706619 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:24.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1428" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:24.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:51:24.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:28.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2722" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1808,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:28.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Jul 25 10:51:28.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-2847 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jul 25 10:51:28.432: INFO: stderr: ""
Jul 25 10:51:28.432: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Jul 25 10:51:28.432: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jul 25 10:51:28.432: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2847" to be "running and ready, or succeeded"
Jul 25 10:51:28.437: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.879867ms
Jul 25 10:51:30.441: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008905114s
Jul 25 10:51:32.445: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.013550374s
Jul 25 10:51:32.446: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jul 25 10:51:32.446: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jul 25 10:51:32.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847'
Jul 25 10:51:32.556: INFO: stderr: ""
Jul 25 10:51:32.556: INFO: stdout: "I0725 10:51:30.888809       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/7m4 594\nI0725 10:51:31.089025       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/vpv 335\nI0725 10:51:31.288955       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/j8g 299\nI0725 10:51:31.488960       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jll 471\nI0725 10:51:31.688968       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fs2 209\nI0725 10:51:31.888994       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/dr2 260\nI0725 10:51:32.088935       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jh4d 365\nI0725 10:51:32.288959       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/k4tg 423\nI0725 10:51:32.488932       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7brm 598\n"
STEP: limiting log lines
Jul 25 10:51:32.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847 --tail=1'
Jul 25 10:51:32.661: INFO: stderr: ""
Jul 25 10:51:32.661: INFO: stdout: "I0725 10:51:32.488932       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7brm 598\n"
Jul 25 10:51:32.661: INFO: got output "I0725 10:51:32.488932       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7brm 598\n"
STEP: limiting log bytes
Jul 25 10:51:32.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847 --limit-bytes=1'
Jul 25 10:51:32.830: INFO: stderr: ""
Jul 25 10:51:32.830: INFO: stdout: "I"
Jul 25 10:51:32.830: INFO: got output "I"
STEP: exposing timestamps
Jul 25 10:51:32.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847 --tail=1 --timestamps'
Jul 25 10:51:33.026: INFO: stderr: ""
Jul 25 10:51:33.026: INFO: stdout: "2020-07-25T10:51:32.889071721Z I0725 10:51:32.888930       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/rgq4 407\n"
Jul 25 10:51:33.026: INFO: got output "2020-07-25T10:51:32.889071721Z I0725 10:51:32.888930       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/rgq4 407\n"
STEP: restricting to a time range
Jul 25 10:51:35.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847 --since=1s'
Jul 25 10:51:35.638: INFO: stderr: ""
Jul 25 10:51:35.638: INFO: stdout: "I0725 10:51:34.688971       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ssj 590\nI0725 10:51:34.888921       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/gc77 267\nI0725 10:51:35.089023       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/5fj 448\nI0725 10:51:35.288990       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9hz 366\nI0725 10:51:35.488947       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/949n 542\n"
Jul 25 10:51:35.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2847 --since=24h'
Jul 25 10:51:35.752: INFO: stderr: ""
Jul 25 10:51:35.752: INFO: stdout: "I0725 10:51:30.888809       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/7m4 594\nI0725 10:51:31.089025       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/vpv 335\nI0725 10:51:31.288955       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/j8g 299\nI0725 10:51:31.488960       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jll 471\nI0725 10:51:31.688968       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fs2 209\nI0725 10:51:31.888994       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/dr2 260\nI0725 10:51:32.088935       1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jh4d 365\nI0725 10:51:32.288959       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/k4tg 423\nI0725 10:51:32.488932       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/7brm 598\nI0725 10:51:32.689090       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/lpdl 233\nI0725 10:51:32.888930       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/rgq4 407\nI0725 10:51:33.089027       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/27b 416\nI0725 10:51:33.288973       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/qcb 553\nI0725 10:51:33.488959       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/c2j 554\nI0725 10:51:33.688988       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/kkp 422\nI0725 10:51:33.888969       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/ldzs 463\nI0725 10:51:34.088941       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/drjj 554\nI0725 10:51:34.288919       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/pn5z 521\nI0725 10:51:34.488918       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/nxhn 390\nI0725 10:51:34.688971       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ssj 590\nI0725 10:51:34.888921       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/gc77 267\nI0725 10:51:35.089023       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/5fj 448\nI0725 10:51:35.288990       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9hz 366\nI0725 10:51:35.488947       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/949n 542\nI0725 10:51:35.688971       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/z2p 435\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Jul 25 10:51:35.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2847'
Jul 25 10:51:43.299: INFO: stderr: ""
Jul 25 10:51:43.299: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:51:43.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2847" for this suite.

• [SLOW TEST:15.161 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":94,"skipped":1810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:51:43.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 25 10:51:43.371: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 25 10:51:43.403: INFO: Waiting for terminating namespaces to be deleted...
Jul 25 10:51:43.406: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 25 10:51:43.411: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 25 10:51:43.411: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 10:51:43.411: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 25 10:51:43.411: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 10:51:43.411: INFO: pod-logs-websocket-df512ec0-1606-413e-965a-78151ec0e09c from pods-2722 started at 2020-07-25 10:51:24 +0000 UTC (1 container statuses recorded)
Jul 25 10:51:43.411: INFO: 	Container main ready: true, restart count 0
Jul 25 10:51:43.411: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 25 10:51:43.416: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 10:51:43.416: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 10:51:43.416: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 10:51:43.416: INFO: 	Container kindnet-cni ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-000e548b-301c-4276-b84a-151d99552c53 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-000e548b-301c-4276-b84a-151d99552c53 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-000e548b-301c-4276-b84a-151d99552c53
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:52:01.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3347" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:18.558 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":95,"skipped":1845,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:52:01.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 25 10:52:01.966: INFO: Waiting up to 5m0s for pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3" in namespace "emptydir-5883" to be "Succeeded or Failed"
Jul 25 10:52:01.997: INFO: Pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.844155ms
Jul 25 10:52:04.001: INFO: Pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03483177s
Jul 25 10:52:06.005: INFO: Pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038795064s
Jul 25 10:52:08.009: INFO: Pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042329649s
STEP: Saw pod success
Jul 25 10:52:08.009: INFO: Pod "pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3" satisfied condition "Succeeded or Failed"
Jul 25 10:52:08.011: INFO: Trying to get logs from node kali-worker2 pod pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3 container test-container: 
STEP: delete the pod
Jul 25 10:52:08.079: INFO: Waiting for pod pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3 to disappear
Jul 25 10:52:08.359: INFO: Pod pod-aa96c434-e405-46ba-b6d5-50c64d8b3ec3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:52:08.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5883" for this suite.

• [SLOW TEST:6.500 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1873,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:52:08.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul 25 10:52:08.779: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:52:24.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3954" for this suite.

• [SLOW TEST:15.699 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1879,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:52:24.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 10:52:24.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul 25 10:52:27.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 create -f -'
Jul 25 10:52:31.447: INFO: stderr: ""
Jul 25 10:52:31.447: INFO: stdout: "e2e-test-crd-publish-openapi-8468-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 25 10:52:31.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 delete e2e-test-crd-publish-openapi-8468-crds test-foo'
Jul 25 10:52:31.574: INFO: stderr: ""
Jul 25 10:52:31.574: INFO: stdout: "e2e-test-crd-publish-openapi-8468-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul 25 10:52:31.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 apply -f -'
Jul 25 10:52:31.827: INFO: stderr: ""
Jul 25 10:52:31.827: INFO: stdout: "e2e-test-crd-publish-openapi-8468-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 25 10:52:31.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 delete e2e-test-crd-publish-openapi-8468-crds test-foo'
Jul 25 10:52:31.953: INFO: stderr: ""
Jul 25 10:52:31.953: INFO: stdout: "e2e-test-crd-publish-openapi-8468-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul 25 10:52:31.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 create -f -'
Jul 25 10:52:32.195: INFO: rc: 1
Jul 25 10:52:32.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 apply -f -'
Jul 25 10:52:32.519: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul 25 10:52:32.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 create -f -'
Jul 25 10:52:32.763: INFO: rc: 1
Jul 25 10:52:32.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5431 apply -f -'
Jul 25 10:52:32.996: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul 25 10:52:32.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8468-crds'
Jul 25 10:52:33.244: INFO: stderr: ""
Jul 25 10:52:33.244: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8468-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul 25 10:52:33.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8468-crds.metadata'
Jul 25 10:52:33.512: INFO: stderr: ""
Jul 25 10:52:33.512: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8468-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul 25 10:52:33.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8468-crds.spec'
Jul 25 10:52:33.757: INFO: stderr: ""
Jul 25 10:52:33.758: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8468-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul 25 10:52:33.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8468-crds.spec.bars'
Jul 25 10:52:33.999: INFO: stderr: ""
Jul 25 10:52:33.999: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8468-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul 25 10:52:33.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8468-crds.spec.bars2'
Jul 25 10:52:34.272: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:52:37.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5431" for this suite.

• [SLOW TEST:13.143 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":98,"skipped":1883,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:52:37.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 25 10:52:37.345: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 25 10:52:37.377: INFO: Waiting for terminating namespaces to be deleted...
Jul 25 10:52:37.380: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 25 10:52:37.385: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 25 10:52:37.385: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 10:52:37.385: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 25 10:52:37.385: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 10:52:37.385: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 25 10:52:37.394: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 10:52:37.394: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 10:52:37.394: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 10:52:37.394: INFO: 	Container kindnet-cni ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-364e5622-9174-4322-8654-bbd21f4f156d 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-364e5622-9174-4322-8654-bbd21f4f156d off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-364e5622-9174-4322-8654-bbd21f4f156d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:57:45.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3444" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:308.777 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":99,"skipped":1891,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:57:45.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:57:46.799: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:57:48.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271466, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271466, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271466, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271466, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:57:51.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:57:52.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6987" for this suite.
STEP: Destroying namespace "webhook-6987-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":100,"skipped":1894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:57:52.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:57:53.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:57:55.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:57:57.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271473, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:58:00.231: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:58:00.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8147" for this suite.
STEP: Destroying namespace "webhook-8147-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.292 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":101,"skipped":1956,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:58:00.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 25 10:58:01.735: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:01.737: INFO: Number of nodes with available pods: 0
Jul 25 10:58:01.737: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:02.742: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:02.745: INFO: Number of nodes with available pods: 0
Jul 25 10:58:02.745: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:03.743: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:03.746: INFO: Number of nodes with available pods: 0
Jul 25 10:58:03.746: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:04.742: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:04.746: INFO: Number of nodes with available pods: 0
Jul 25 10:58:04.747: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:05.790: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:05.793: INFO: Number of nodes with available pods: 0
Jul 25 10:58:05.793: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:07.052: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:07.056: INFO: Number of nodes with available pods: 1
Jul 25 10:58:07.056: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 10:58:07.744: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:07.747: INFO: Number of nodes with available pods: 2
Jul 25 10:58:07.747: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 25 10:58:07.914: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:08.000: INFO: Number of nodes with available pods: 1
Jul 25 10:58:08.000: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:09.010: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:09.014: INFO: Number of nodes with available pods: 1
Jul 25 10:58:09.014: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:10.400: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:10.404: INFO: Number of nodes with available pods: 1
Jul 25 10:58:10.404: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:11.006: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:11.009: INFO: Number of nodes with available pods: 1
Jul 25 10:58:11.010: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:12.160: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:12.164: INFO: Number of nodes with available pods: 1
Jul 25 10:58:12.164: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:13.006: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:13.010: INFO: Number of nodes with available pods: 1
Jul 25 10:58:13.010: INFO: Node kali-worker is running more than one daemon pod
Jul 25 10:58:14.005: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 10:58:14.009: INFO: Number of nodes with available pods: 2
Jul 25 10:58:14.009: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9474, will wait for the garbage collector to delete the pods
Jul 25 10:58:14.073: INFO: Deleting DaemonSet.extensions daemon-set took: 7.03939ms
Jul 25 10:58:14.174: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.280287ms
Jul 25 10:58:23.377: INFO: Number of nodes with available pods: 0
Jul 25 10:58:23.377: INFO: Number of running nodes: 0, number of available pods: 0
Jul 25 10:58:23.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9474/daemonsets","resourceVersion":"4025814"},"items":null}

Jul 25 10:58:23.383: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9474/pods","resourceVersion":"4025814"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:58:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9474" for this suite.

• [SLOW TEST:22.857 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":102,"skipped":1997,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:58:23.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 10:58:24.002: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 10:58:26.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271503, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 10:58:28.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271504, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731271503, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 10:58:31.040: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:58:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4638" for this suite.
STEP: Destroying namespace "webhook-4638-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.425 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":103,"skipped":1999,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:58:31.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jul 25 10:58:39.051: INFO: Successfully updated pod "adopt-release-625rp"
STEP: Checking that the Job readopts the Pod
Jul 25 10:58:39.051: INFO: Waiting up to 15m0s for pod "adopt-release-625rp" in namespace "job-4529" to be "adopted"
Jul 25 10:58:39.071: INFO: Pod "adopt-release-625rp": Phase="Running", Reason="", readiness=true. Elapsed: 19.852886ms
Jul 25 10:58:41.076: INFO: Pod "adopt-release-625rp": Phase="Running", Reason="", readiness=true. Elapsed: 2.024303457s
Jul 25 10:58:41.076: INFO: Pod "adopt-release-625rp" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jul 25 10:58:41.637: INFO: Successfully updated pod "adopt-release-625rp"
STEP: Checking that the Job releases the Pod
Jul 25 10:58:41.637: INFO: Waiting up to 15m0s for pod "adopt-release-625rp" in namespace "job-4529" to be "released"
Jul 25 10:58:41.644: INFO: Pod "adopt-release-625rp": Phase="Running", Reason="", readiness=true. Elapsed: 7.135385ms
Jul 25 10:58:43.854: INFO: Pod "adopt-release-625rp": Phase="Running", Reason="", readiness=true. Elapsed: 2.216931453s
Jul 25 10:58:43.854: INFO: Pod "adopt-release-625rp" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:58:43.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4529" for this suite.

• [SLOW TEST:12.079 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":104,"skipped":2009,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:58:43.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:59:01.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8181" for this suite.

• [SLOW TEST:17.483 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":105,"skipped":2010,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:59:01.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-4dcb9f80-ea1c-471a-9e95-f5839bd2843b in namespace container-probe-2433
Jul 25 10:59:05.511: INFO: Started pod liveness-4dcb9f80-ea1c-471a-9e95-f5839bd2843b in namespace container-probe-2433
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 10:59:05.513: INFO: Initial restart count of pod liveness-4dcb9f80-ea1c-471a-9e95-f5839bd2843b is 0
Jul 25 10:59:27.770: INFO: Restart count of pod container-probe-2433/liveness-4dcb9f80-ea1c-471a-9e95-f5839bd2843b is now 1 (22.257059131s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:59:28.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2433" for this suite.

• [SLOW TEST:28.104 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":2011,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:59:29.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Jul 25 10:59:31.155: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul 25 10:59:31.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:32.876: INFO: stderr: ""
Jul 25 10:59:32.876: INFO: stdout: "service/agnhost-slave created\n"
Jul 25 10:59:32.876: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul 25 10:59:32.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:33.848: INFO: stderr: ""
Jul 25 10:59:33.848: INFO: stdout: "service/agnhost-master created\n"
Jul 25 10:59:33.848: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 25 10:59:33.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:34.209: INFO: stderr: ""
Jul 25 10:59:34.209: INFO: stdout: "service/frontend created\n"
Jul 25 10:59:34.209: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul 25 10:59:34.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:34.456: INFO: stderr: ""
Jul 25 10:59:34.456: INFO: stdout: "deployment.apps/frontend created\n"
Jul 25 10:59:34.457: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 25 10:59:34.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:34.812: INFO: stderr: ""
Jul 25 10:59:34.812: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul 25 10:59:34.812: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 25 10:59:34.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8347'
Jul 25 10:59:35.168: INFO: stderr: ""
Jul 25 10:59:35.168: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul 25 10:59:35.168: INFO: Waiting for all frontend pods to be Running.
Jul 25 10:59:50.219: INFO: Waiting for frontend to serve content.
Jul 25 10:59:50.227: INFO: Trying to add a new entry to the guestbook.
Jul 25 10:59:50.236: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 25 10:59:50.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:51.528: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:51.528: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 25 10:59:51.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:53.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:53.147: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 25 10:59:53.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:54.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:54.187: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 25 10:59:54.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:54.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:54.571: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 25 10:59:54.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:55.914: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:55.914: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 25 10:59:55.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8347'
Jul 25 10:59:57.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 10:59:57.375: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 10:59:57.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8347" for this suite.

• [SLOW TEST:29.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":107,"skipped":2019,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 10:59:59.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-d3f4ae97-16bf-4c6f-9e5f-1abc6bba9062
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:00:16.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6347" for this suite.

• [SLOW TEST:17.948 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":2023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:00:16.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9009, will wait for the garbage collector to delete the pods
Jul 25 11:00:23.427: INFO: Deleting Job.batch foo took: 225.050565ms
Jul 25 11:00:23.827: INFO: Terminating Job.batch foo pods took: 400.254516ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:03.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9009" for this suite.

• [SLOW TEST:46.583 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":109,"skipped":2068,"failed":0}
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:03.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:01:03.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310" in namespace "downward-api-4939" to be "Succeeded or Failed"
Jul 25 11:01:03.613: INFO: Pod "downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310": Phase="Pending", Reason="", readiness=false. Elapsed: 20.291593ms
Jul 25 11:01:05.617: INFO: Pod "downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02416435s
Jul 25 11:01:07.620: INFO: Pod "downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027924421s
STEP: Saw pod success
Jul 25 11:01:07.620: INFO: Pod "downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310" satisfied condition "Succeeded or Failed"
Jul 25 11:01:07.624: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310 container client-container: 
STEP: delete the pod
Jul 25 11:01:07.683: INFO: Waiting for pod downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310 to disappear
Jul 25 11:01:07.697: INFO: Pod downwardapi-volume-20d9fe7b-909d-4ac5-95ef-6020942bf310 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:07.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4939" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":2068,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:07.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7685
STEP: Creating an active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-7685
STEP: creating replication controller externalsvc in namespace services-7685
I0725 11:01:08.100894       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7685, replica count: 2
I0725 11:01:11.151324       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 11:01:14.151586       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul 25 11:01:14.233: INFO: Creating new exec pod
Jul 25 11:01:18.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7685 execpod8q5kr -- /bin/sh -x -c nslookup nodeport-service'
Jul 25 11:01:18.500: INFO: stderr: "I0725 11:01:18.406446    2159 log.go:172] (0xc0009d0fd0) (0xc000b246e0) Create stream\nI0725 11:01:18.406501    2159 log.go:172] (0xc0009d0fd0) (0xc000b246e0) Stream added, broadcasting: 1\nI0725 11:01:18.409482    2159 log.go:172] (0xc0009d0fd0) Reply frame received for 1\nI0725 11:01:18.409533    2159 log.go:172] (0xc0009d0fd0) (0xc000ac80a0) Create stream\nI0725 11:01:18.409550    2159 log.go:172] (0xc0009d0fd0) (0xc000ac80a0) Stream added, broadcasting: 3\nI0725 11:01:18.410613    2159 log.go:172] (0xc0009d0fd0) Reply frame received for 3\nI0725 11:01:18.410654    2159 log.go:172] (0xc0009d0fd0) (0xc000b24780) Create stream\nI0725 11:01:18.410667    2159 log.go:172] (0xc0009d0fd0) (0xc000b24780) Stream added, broadcasting: 5\nI0725 11:01:18.411563    2159 log.go:172] (0xc0009d0fd0) Reply frame received for 5\nI0725 11:01:18.477352    2159 log.go:172] (0xc0009d0fd0) Data frame received for 5\nI0725 11:01:18.477379    2159 log.go:172] (0xc000b24780) (5) Data frame handling\nI0725 11:01:18.477399    2159 log.go:172] (0xc000b24780) (5) Data frame sent\n+ nslookup nodeport-service\nI0725 11:01:18.491020    2159 log.go:172] (0xc0009d0fd0) Data frame received for 3\nI0725 11:01:18.491045    2159 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0725 11:01:18.491066    2159 log.go:172] (0xc000ac80a0) (3) Data frame sent\nI0725 11:01:18.492169    2159 log.go:172] (0xc0009d0fd0) Data frame received for 3\nI0725 11:01:18.492180    2159 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0725 11:01:18.492187    2159 log.go:172] (0xc000ac80a0) (3) Data frame sent\nI0725 11:01:18.492594    2159 log.go:172] (0xc0009d0fd0) Data frame received for 5\nI0725 11:01:18.492603    2159 log.go:172] (0xc000b24780) (5) Data frame handling\nI0725 11:01:18.492712    2159 log.go:172] (0xc0009d0fd0) Data frame received for 3\nI0725 11:01:18.492828    2159 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0725 11:01:18.494648    2159 log.go:172] (0xc0009d0fd0) Data frame received for 1\nI0725 11:01:18.494664    2159 log.go:172] (0xc000b246e0) (1) Data frame handling\nI0725 11:01:18.494675    2159 log.go:172] (0xc000b246e0) (1) Data frame sent\nI0725 11:01:18.494690    2159 log.go:172] (0xc0009d0fd0) (0xc000b246e0) Stream removed, broadcasting: 1\nI0725 11:01:18.494932    2159 log.go:172] (0xc0009d0fd0) Go away received\nI0725 11:01:18.495102    2159 log.go:172] (0xc0009d0fd0) (0xc000b246e0) Stream removed, broadcasting: 1\nI0725 11:01:18.495115    2159 log.go:172] (0xc0009d0fd0) (0xc000ac80a0) Stream removed, broadcasting: 3\nI0725 11:01:18.495120    2159 log.go:172] (0xc0009d0fd0) (0xc000b24780) Stream removed, broadcasting: 5\n"
Jul 25 11:01:18.500: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7685.svc.cluster.local\tcanonical name = externalsvc.services-7685.svc.cluster.local.\nName:\texternalsvc.services-7685.svc.cluster.local\nAddress: 10.102.49.67\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7685, will wait for the garbage collector to delete the pods
Jul 25 11:01:18.561: INFO: Deleting ReplicationController externalsvc took: 6.74097ms
Jul 25 11:01:18.861: INFO: Terminating ReplicationController externalsvc pods took: 300.26818ms
Jul 25 11:01:33.642: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:33.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7685" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:25.983 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":111,"skipped":2070,"failed":0}
SSSSSS
------------------------------
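For reference, a minimal client-go sketch of the NodePort-to-ExternalName change the test above performs: the service is re-typed to ExternalName and pointed at another service's FQDN, after which DNS resolves it as the CNAME seen in the nslookup output above. The "default" namespace, service names, and target FQDN here are illustrative, not the test's own.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	svc, err := cs.CoreV1().Services("default").Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// An ExternalName service is a pure DNS alias: the type changes, externalName
	// points at the other service's FQDN, and clusterIP/ports are cleared because
	// the API does not allow an ExternalName service to keep a cluster IP.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.default.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	if _, err := cs.CoreV1().Services("default").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
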
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:33.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Jul 25 11:01:33.850: INFO: Waiting up to 5m0s for pod "client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f" in namespace "containers-6213" to be "Succeeded or Failed"
Jul 25 11:01:33.867: INFO: Pod "client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.302909ms
Jul 25 11:01:35.938: INFO: Pod "client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087453383s
Jul 25 11:01:37.945: INFO: Pod "client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094499671s
STEP: Saw pod success
Jul 25 11:01:37.945: INFO: Pod "client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f" satisfied condition "Succeeded or Failed"
Jul 25 11:01:37.948: INFO: Trying to get logs from node kali-worker pod client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f container test-container: 
STEP: delete the pod
Jul 25 11:01:37.977: INFO: Waiting for pod client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f to disappear
Jul 25 11:01:37.993: INFO: Pod client-containers-dc847b0d-9f22-4638-95f3-b2464f5e2f0f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:37.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6213" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":2076,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
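For reference, a minimal client-go sketch of the "override all" pod the Docker Containers test above creates: setting Command replaces the image's ENTRYPOINT and setting Args replaces its CMD. The pod name, busybox image, and "default" namespace are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"echo"},            // overrides the image ENTRYPOINT
				Args:    []string{"override", "all"}, // overrides the image CMD
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
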
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:38.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-dbcf06ee-e138-4314-a403-3942f5d8f686
STEP: Creating a pod to test consume secrets
Jul 25 11:01:38.094: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60" in namespace "projected-3004" to be "Succeeded or Failed"
Jul 25 11:01:38.109: INFO: Pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60": Phase="Pending", Reason="", readiness=false. Elapsed: 15.141169ms
Jul 25 11:01:40.113: INFO: Pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019287204s
Jul 25 11:01:42.117: INFO: Pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023242386s
Jul 25 11:01:44.121: INFO: Pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027074922s
STEP: Saw pod success
Jul 25 11:01:44.121: INFO: Pod "pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60" satisfied condition "Succeeded or Failed"
Jul 25 11:01:44.124: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60 container secret-volume-test: 
STEP: delete the pod
Jul 25 11:01:44.317: INFO: Waiting for pod pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60 to disappear
Jul 25 11:01:44.424: INFO: Pod pod-projected-secrets-03581ba9-5dd2-4d06-9881-ee45f377fa60 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3004" for this suite.

• [SLOW TEST:6.431 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":2092,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
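For reference, a minimal client-go sketch of a pod that consumes the same projected secret through two separate volumes, which is the shape the Projected secret test above exercises. The secret name, pod name, image, and namespace are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Both volumes project the same secret; the pod mounts them at two paths.
	secretVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-example"},
						},
					}},
				},
			},
		}
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
				},
			}},
			Volumes: []corev1.Volume{secretVolume("secret-volume-1"), secretVolume("secret-volume-2")},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
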
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:44.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-5908
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5908
STEP: Deleting pre-stop pod
Jul 25 11:01:59.736: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:01:59.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5908" for this suite.

• [SLOW TEST:15.381 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":114,"skipped":2111,"failed":0}
SS
------------------------------
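For reference, a minimal client-go sketch of a pod with a preStop exec hook, the mechanism the PreStop test above verifies (the hook ran once before the pod was killed, hence "prestop": 1 in the report). The pod name, image, and the URL the hook contacts are purely illustrative; note that client-go of this era names the hook type corev1.Handler, renamed LifecycleHandler in later releases.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// The preStop hook runs before the container is stopped when the pod is deleted.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Illustrative only: notify a peer service that shutdown is starting.
							Command: []string{"sh", "-c", "wget -qO- http://server.default.svc:8080/prestop || true"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
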
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:01:59.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 25 11:02:04.191: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:02:04.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8870" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":2113,"failed":0}
SSS
------------------------------
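For reference, a minimal client-go sketch of the termination-message setup the Container Runtime test above checks: with TerminationMessagePolicy FallbackToLogsOnError, the container log is used as the message only when the container fails, so a container that succeeds without writing the termination-log file ends with an empty message, matching the "Expected: &{}" assertion above. Names and the image are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "exit 0"}, // succeeds, writes no termination log
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
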
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:02:04.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 25 11:02:04.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:02:21.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3450" for this suite.

• [SLOW TEST:16.948 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":116,"skipped":2116,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:02:21.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1561
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 25 11:02:21.382: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 25 11:02:21.424: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:02:23.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:02:25.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:27.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:29.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:31.448: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:33.436: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:35.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:37.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:39.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:41.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:02:43.544: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 25 11:02:43.549: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 25 11:02:47.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.230:8080/dial?request=hostname&protocol=http&host=10.244.2.229&port=8080&tries=1'] Namespace:pod-network-test-1561 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:02:47.639: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:02:47.674912       7 log.go:172] (0xc0054b3d90) (0xc002af2e60) Create stream
I0725 11:02:47.674942       7 log.go:172] (0xc0054b3d90) (0xc002af2e60) Stream added, broadcasting: 1
I0725 11:02:47.676834       7 log.go:172] (0xc0054b3d90) Reply frame received for 1
I0725 11:02:47.676897       7 log.go:172] (0xc0054b3d90) (0xc002ed1400) Create stream
I0725 11:02:47.676924       7 log.go:172] (0xc0054b3d90) (0xc002ed1400) Stream added, broadcasting: 3
I0725 11:02:47.677722       7 log.go:172] (0xc0054b3d90) Reply frame received for 3
I0725 11:02:47.677754       7 log.go:172] (0xc0054b3d90) (0xc0016988c0) Create stream
I0725 11:02:47.677764       7 log.go:172] (0xc0054b3d90) (0xc0016988c0) Stream added, broadcasting: 5
I0725 11:02:47.678376       7 log.go:172] (0xc0054b3d90) Reply frame received for 5
I0725 11:02:47.742417       7 log.go:172] (0xc0054b3d90) Data frame received for 3
I0725 11:02:47.742443       7 log.go:172] (0xc002ed1400) (3) Data frame handling
I0725 11:02:47.742456       7 log.go:172] (0xc002ed1400) (3) Data frame sent
I0725 11:02:47.743172       7 log.go:172] (0xc0054b3d90) Data frame received for 3
I0725 11:02:47.743213       7 log.go:172] (0xc002ed1400) (3) Data frame handling
I0725 11:02:47.743419       7 log.go:172] (0xc0054b3d90) Data frame received for 5
I0725 11:02:47.743430       7 log.go:172] (0xc0016988c0) (5) Data frame handling
I0725 11:02:47.747946       7 log.go:172] (0xc0054b3d90) Data frame received for 1
I0725 11:02:47.747961       7 log.go:172] (0xc002af2e60) (1) Data frame handling
I0725 11:02:47.747975       7 log.go:172] (0xc002af2e60) (1) Data frame sent
I0725 11:02:47.747993       7 log.go:172] (0xc0054b3d90) (0xc002af2e60) Stream removed, broadcasting: 1
I0725 11:02:47.748002       7 log.go:172] (0xc0054b3d90) Go away received
I0725 11:02:47.748128       7 log.go:172] (0xc0054b3d90) (0xc002af2e60) Stream removed, broadcasting: 1
I0725 11:02:47.748152       7 log.go:172] (0xc0054b3d90) (0xc002ed1400) Stream removed, broadcasting: 3
I0725 11:02:47.748171       7 log.go:172] (0xc0054b3d90) (0xc0016988c0) Stream removed, broadcasting: 5
Jul 25 11:02:47.748: INFO: Waiting for responses: map[]
Jul 25 11:02:47.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.230:8080/dial?request=hostname&protocol=http&host=10.244.1.79&port=8080&tries=1'] Namespace:pod-network-test-1561 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:02:47.751: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:02:47.776332       7 log.go:172] (0xc0040844d0) (0xc002af3360) Create stream
I0725 11:02:47.776361       7 log.go:172] (0xc0040844d0) (0xc002af3360) Stream added, broadcasting: 1
I0725 11:02:47.778344       7 log.go:172] (0xc0040844d0) Reply frame received for 1
I0725 11:02:47.778369       7 log.go:172] (0xc0040844d0) (0xc002af34a0) Create stream
I0725 11:02:47.778378       7 log.go:172] (0xc0040844d0) (0xc002af34a0) Stream added, broadcasting: 3
I0725 11:02:47.779290       7 log.go:172] (0xc0040844d0) Reply frame received for 3
I0725 11:02:47.779335       7 log.go:172] (0xc0040844d0) (0xc000f80000) Create stream
I0725 11:02:47.779351       7 log.go:172] (0xc0040844d0) (0xc000f80000) Stream added, broadcasting: 5
I0725 11:02:47.780236       7 log.go:172] (0xc0040844d0) Reply frame received for 5
I0725 11:02:47.842666       7 log.go:172] (0xc0040844d0) Data frame received for 3
I0725 11:02:47.842691       7 log.go:172] (0xc002af34a0) (3) Data frame handling
I0725 11:02:47.842711       7 log.go:172] (0xc002af34a0) (3) Data frame sent
I0725 11:02:47.843243       7 log.go:172] (0xc0040844d0) Data frame received for 3
I0725 11:02:47.843264       7 log.go:172] (0xc002af34a0) (3) Data frame handling
I0725 11:02:47.843496       7 log.go:172] (0xc0040844d0) Data frame received for 5
I0725 11:02:47.843519       7 log.go:172] (0xc000f80000) (5) Data frame handling
I0725 11:02:47.845208       7 log.go:172] (0xc0040844d0) Data frame received for 1
I0725 11:02:47.845256       7 log.go:172] (0xc002af3360) (1) Data frame handling
I0725 11:02:47.845291       7 log.go:172] (0xc002af3360) (1) Data frame sent
I0725 11:02:47.845311       7 log.go:172] (0xc0040844d0) (0xc002af3360) Stream removed, broadcasting: 1
I0725 11:02:47.845334       7 log.go:172] (0xc0040844d0) Go away received
I0725 11:02:47.845465       7 log.go:172] (0xc0040844d0) (0xc002af3360) Stream removed, broadcasting: 1
I0725 11:02:47.845493       7 log.go:172] (0xc0040844d0) (0xc002af34a0) Stream removed, broadcasting: 3
I0725 11:02:47.845506       7 log.go:172] (0xc0040844d0) (0xc000f80000) Stream removed, broadcasting: 5
Jul 25 11:02:47.845: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:02:47.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1561" for this suite.

• [SLOW TEST:26.538 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:02:47.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:02:59.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5721" for this suite.

• [SLOW TEST:11.260 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":118,"skipped":2165,"failed":0}
SSSSS
------------------------------
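For reference, a minimal client-go sketch of the ResourceQuota lifecycle the test above walks through: create a quota that counts replication controllers, then read its status, which the quota controller fills in asynchronously (hence the "Ensuring resource quota status ..." polling steps above). The quota name and "default" namespace are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-for-rc"},
		Spec: corev1.ResourceQuotaSpec{
			// Hard limit on the number of replication controllers in the namespace.
			Hard: corev1.ResourceList{corev1.ResourceReplicationControllers: resource.MustParse("1")},
		},
	}
	created, err := cs.CoreV1().ResourceQuotas("default").Create(ctx, quota, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Status.Used is computed by the quota controller; in practice this needs polling.
	got, err := cs.CoreV1().ResourceQuotas("default").Get(ctx, created.Name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("used:", got.Status.Used)
}
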
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:02:59.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:03.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2679" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":119,"skipped":2170,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:03.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:04.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2326" for this suite.
STEP: Destroying namespace "nspatchtest-d3d87b53-8013-414f-8ea4-84bf811fa6b8-4677" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":120,"skipped":2177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
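For reference, a minimal client-go sketch of the patch-then-verify flow in the Namespaces test above: a strategic-merge patch adds a label to a namespace, and the returned object shows the label. The namespace name and label key/value are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic merge patch that adds a single label to the namespace.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "example-namespace",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", ns.Labels)
}
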
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:04.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-69620601-a3c2-43e4-a2e2-b630400bd367
STEP: Creating a pod to test consume secrets
Jul 25 11:03:04.503: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40" in namespace "projected-5405" to be "Succeeded or Failed"
Jul 25 11:03:04.587: INFO: Pod "pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40": Phase="Pending", Reason="", readiness=false. Elapsed: 83.201389ms
Jul 25 11:03:06.591: INFO: Pod "pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087309403s
Jul 25 11:03:08.611: INFO: Pod "pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107498757s
STEP: Saw pod success
Jul 25 11:03:08.611: INFO: Pod "pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40" satisfied condition "Succeeded or Failed"
Jul 25 11:03:08.616: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40 container projected-secret-volume-test: 
STEP: delete the pod
Jul 25 11:03:08.762: INFO: Waiting for pod pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40 to disappear
Jul 25 11:03:08.766: INFO: Pod pod-projected-secrets-0288d93c-4f8a-460a-b555-f9371fcaba40 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:08.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5405" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2214,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:08.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:25.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9792" for this suite.

• [SLOW TEST:16.319 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":122,"skipped":2225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:25.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:03:25.292: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022" in namespace "projected-7053" to be "Succeeded or Failed"
Jul 25 11:03:25.317: INFO: Pod "downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022": Phase="Pending", Reason="", readiness=false. Elapsed: 25.468024ms
Jul 25 11:03:27.323: INFO: Pod "downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030876926s
Jul 25 11:03:29.341: INFO: Pod "downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049324009s
STEP: Saw pod success
Jul 25 11:03:29.341: INFO: Pod "downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022" satisfied condition "Succeeded or Failed"
Jul 25 11:03:29.345: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022 container client-container: 
STEP: delete the pod
Jul 25 11:03:29.391: INFO: Waiting for pod downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022 to disappear
Jul 25 11:03:29.401: INFO: Pod downwardapi-volume-3a128be0-e728-4d60-b165-0193995e2022 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7053" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2248,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:29.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 25 11:03:29.482: INFO: Waiting up to 5m0s for pod "pod-c4c64769-01f6-48fc-9356-a7ca686864ca" in namespace "emptydir-1118" to be "Succeeded or Failed"
Jul 25 11:03:29.496: INFO: Pod "pod-c4c64769-01f6-48fc-9356-a7ca686864ca": Phase="Pending", Reason="", readiness=false. Elapsed: 13.496094ms
Jul 25 11:03:31.499: INFO: Pod "pod-c4c64769-01f6-48fc-9356-a7ca686864ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017252219s
Jul 25 11:03:33.504: INFO: Pod "pod-c4c64769-01f6-48fc-9356-a7ca686864ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02147696s
STEP: Saw pod success
Jul 25 11:03:33.504: INFO: Pod "pod-c4c64769-01f6-48fc-9356-a7ca686864ca" satisfied condition "Succeeded or Failed"
Jul 25 11:03:33.507: INFO: Trying to get logs from node kali-worker pod pod-c4c64769-01f6-48fc-9356-a7ca686864ca container test-container: 
STEP: delete the pod
Jul 25 11:03:33.543: INFO: Waiting for pod pod-c4c64769-01f6-48fc-9356-a7ca686864ca to disappear
Jul 25 11:03:33.580: INFO: Pod pod-c4c64769-01f6-48fc-9356-a7ca686864ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:33.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1118" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2256,"failed":0}
SSSSSSSSSSSSS
------------------------------
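For reference, a minimal client-go sketch of a tmpfs-backed emptyDir pod in the spirit of the EmptyDir test above: the volume uses medium "Memory" and the pod runs as a non-root UID, writing a file into the mount. The pod name, image, UID, and file path are illustrative, not the test's own.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid := int64(1000) // non-root, as in the test name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo hello > /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
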
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:33.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:33.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-847" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":125,"skipped":2269,"failed":0}
SSS
------------------------------
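For reference, a minimal client-go sketch of the coordination.k8s.io Lease API that the test above exercises: create a Lease with a holder identity and duration, then read it back. The lease name, holder, and "default" namespace are illustrative.

package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	holder := "example-holder"
	duration := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "example-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}
	if _, err := cs.CoordinationV1().Leases("default").Create(ctx, lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	got, err := cs.CoordinationV1().Leases("default").Get(ctx, "example-lease", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("holder:", *got.Spec.HolderIdentity)
}
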
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:33.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0725 11:03:43.839188       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 11:03:43.839: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:03:43.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8431" for this suite.

• [SLOW TEST:10.128 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":126,"skipped":2272,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
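For reference, a minimal client-go sketch of the deletion mode the Garbage collector test above relies on: deleting a replication controller with background propagation lets the garbage collector remove the dependent pods afterwards (as opposed to orphaning them). The controller name and "default" namespace are illustrative.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation: the RC is deleted first, then the GC deletes its pods.
	// metav1.DeletePropagationOrphan would instead leave the pods behind.
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(), "example-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}
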
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:03:43.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-sdwr
STEP: Creating a pod to test atomic-volume-subpath
Jul 25 11:03:43.973: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sdwr" in namespace "subpath-6239" to be "Succeeded or Failed"
Jul 25 11:03:44.005: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Pending", Reason="", readiness=false. Elapsed: 32.769974ms
Jul 25 11:03:46.071: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098812314s
Jul 25 11:03:48.076: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 4.103164901s
Jul 25 11:03:50.080: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 6.107427265s
Jul 25 11:03:52.084: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 8.111801081s
Jul 25 11:03:54.089: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 10.116426529s
Jul 25 11:03:56.094: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 12.121002824s
Jul 25 11:03:58.098: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 14.125014864s
Jul 25 11:04:00.102: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 16.129135893s
Jul 25 11:04:02.106: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 18.133613559s
Jul 25 11:04:04.111: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 20.138101626s
Jul 25 11:04:06.115: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 22.142086294s
Jul 25 11:04:08.119: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Running", Reason="", readiness=true. Elapsed: 24.146676374s
Jul 25 11:04:10.148: INFO: Pod "pod-subpath-test-configmap-sdwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.175055705s
STEP: Saw pod success
Jul 25 11:04:10.148: INFO: Pod "pod-subpath-test-configmap-sdwr" satisfied condition "Succeeded or Failed"
Jul 25 11:04:10.150: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-sdwr container test-container-subpath-configmap-sdwr: 
STEP: delete the pod
Jul 25 11:04:10.265: INFO: Waiting for pod pod-subpath-test-configmap-sdwr to disappear
Jul 25 11:04:10.273: INFO: Pod pod-subpath-test-configmap-sdwr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sdwr
Jul 25 11:04:10.273: INFO: Deleting pod "pod-subpath-test-configmap-sdwr" in namespace "subpath-6239"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:04:10.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6239" for this suite.

• [SLOW TEST:26.435 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":127,"skipped":2288,"failed":0}
SSSSSSSSSSSSSS
------------------------------
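For reference, a minimal client-go sketch of a configMap subPath mount like the one the Subpath test above exercises: the volume holds the whole configMap, while the mount's SubPath selects a single key and mounts it as one file. The pod name, configMap name, key, image, and namespace are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test/title"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config-volume",
					MountPath: "/test/title",
					// SubPath mounts only the "title" key of the configMap volume as a file.
					SubPath: "title",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
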
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:04:10.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 25 11:04:10.348: INFO: Waiting up to 5m0s for pod "pod-b3840eff-8231-4a74-a64a-45b2a4536be2" in namespace "emptydir-107" to be "Succeeded or Failed"
Jul 25 11:04:10.563: INFO: Pod "pod-b3840eff-8231-4a74-a64a-45b2a4536be2": Phase="Pending", Reason="", readiness=false. Elapsed: 215.62403ms
Jul 25 11:04:12.568: INFO: Pod "pod-b3840eff-8231-4a74-a64a-45b2a4536be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220134481s
Jul 25 11:04:14.573: INFO: Pod "pod-b3840eff-8231-4a74-a64a-45b2a4536be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224917957s
STEP: Saw pod success
Jul 25 11:04:14.573: INFO: Pod "pod-b3840eff-8231-4a74-a64a-45b2a4536be2" satisfied condition "Succeeded or Failed"
Jul 25 11:04:14.576: INFO: Trying to get logs from node kali-worker2 pod pod-b3840eff-8231-4a74-a64a-45b2a4536be2 container test-container: 
STEP: delete the pod
Jul 25 11:04:14.745: INFO: Waiting for pod pod-b3840eff-8231-4a74-a64a-45b2a4536be2 to disappear
Jul 25 11:04:14.752: INFO: Pod pod-b3840eff-8231-4a74-a64a-45b2a4536be2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:04:14.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-107" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:04:14.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3872
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 25 11:04:14.829: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 25 11:04:14.930: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:04:16.934: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:04:18.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:20.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:22.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:24.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:26.946: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:28.938: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:30.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:32.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:34.945: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:36.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:04:38.935: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 25 11:04:38.941: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 25 11:04:45.008: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.234:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3872 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:04:45.008: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:04:45.033720       7 log.go:172] (0xc0021b62c0) (0xc0010695e0) Create stream
I0725 11:04:45.033771       7 log.go:172] (0xc0021b62c0) (0xc0010695e0) Stream added, broadcasting: 1
I0725 11:04:45.038871       7 log.go:172] (0xc0021b62c0) Reply frame received for 1
I0725 11:04:45.038907       7 log.go:172] (0xc0021b62c0) (0xc0019aa000) Create stream
I0725 11:04:45.038922       7 log.go:172] (0xc0021b62c0) (0xc0019aa000) Stream added, broadcasting: 3
I0725 11:04:45.039745       7 log.go:172] (0xc0021b62c0) Reply frame received for 3
I0725 11:04:45.039766       7 log.go:172] (0xc0021b62c0) (0xc0016985a0) Create stream
I0725 11:04:45.039774       7 log.go:172] (0xc0021b62c0) (0xc0016985a0) Stream added, broadcasting: 5
I0725 11:04:45.040508       7 log.go:172] (0xc0021b62c0) Reply frame received for 5
I0725 11:04:45.098956       7 log.go:172] (0xc0021b62c0) Data frame received for 3
I0725 11:04:45.099061       7 log.go:172] (0xc0019aa000) (3) Data frame handling
I0725 11:04:45.099084       7 log.go:172] (0xc0019aa000) (3) Data frame sent
I0725 11:04:45.099099       7 log.go:172] (0xc0021b62c0) Data frame received for 3
I0725 11:04:45.099113       7 log.go:172] (0xc0019aa000) (3) Data frame handling
I0725 11:04:45.099220       7 log.go:172] (0xc0021b62c0) Data frame received for 5
I0725 11:04:45.099259       7 log.go:172] (0xc0016985a0) (5) Data frame handling
I0725 11:04:45.101090       7 log.go:172] (0xc0021b62c0) Data frame received for 1
I0725 11:04:45.101108       7 log.go:172] (0xc0010695e0) (1) Data frame handling
I0725 11:04:45.101119       7 log.go:172] (0xc0010695e0) (1) Data frame sent
I0725 11:04:45.101180       7 log.go:172] (0xc0021b62c0) (0xc0010695e0) Stream removed, broadcasting: 1
I0725 11:04:45.101258       7 log.go:172] (0xc0021b62c0) (0xc0010695e0) Stream removed, broadcasting: 1
I0725 11:04:45.101273       7 log.go:172] (0xc0021b62c0) (0xc0019aa000) Stream removed, broadcasting: 3
I0725 11:04:45.101458       7 log.go:172] (0xc0021b62c0) Go away received
I0725 11:04:45.101676       7 log.go:172] (0xc0021b62c0) (0xc0016985a0) Stream removed, broadcasting: 5
Jul 25 11:04:45.101: INFO: Found all expected endpoints: [netserver-0]
Jul 25 11:04:45.105: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.85:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3872 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:04:45.105: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:04:45.136140       7 log.go:172] (0xc0027202c0) (0xc0019aa5a0) Create stream
I0725 11:04:45.136177       7 log.go:172] (0xc0027202c0) (0xc0019aa5a0) Stream added, broadcasting: 1
I0725 11:04:45.138551       7 log.go:172] (0xc0027202c0) Reply frame received for 1
I0725 11:04:45.138587       7 log.go:172] (0xc0027202c0) (0xc001069680) Create stream
I0725 11:04:45.138601       7 log.go:172] (0xc0027202c0) (0xc001069680) Stream added, broadcasting: 3
I0725 11:04:45.139899       7 log.go:172] (0xc0027202c0) Reply frame received for 3
I0725 11:04:45.139948       7 log.go:172] (0xc0027202c0) (0xc0019aa640) Create stream
I0725 11:04:45.139966       7 log.go:172] (0xc0027202c0) (0xc0019aa640) Stream added, broadcasting: 5
I0725 11:04:45.141215       7 log.go:172] (0xc0027202c0) Reply frame received for 5
I0725 11:04:45.215386       7 log.go:172] (0xc0027202c0) Data frame received for 3
I0725 11:04:45.215425       7 log.go:172] (0xc001069680) (3) Data frame handling
I0725 11:04:45.215441       7 log.go:172] (0xc001069680) (3) Data frame sent
I0725 11:04:45.215452       7 log.go:172] (0xc0027202c0) Data frame received for 3
I0725 11:04:45.215462       7 log.go:172] (0xc001069680) (3) Data frame handling
I0725 11:04:45.215486       7 log.go:172] (0xc0027202c0) Data frame received for 5
I0725 11:04:45.215497       7 log.go:172] (0xc0019aa640) (5) Data frame handling
I0725 11:04:45.217136       7 log.go:172] (0xc0027202c0) Data frame received for 1
I0725 11:04:45.217155       7 log.go:172] (0xc0019aa5a0) (1) Data frame handling
I0725 11:04:45.217163       7 log.go:172] (0xc0019aa5a0) (1) Data frame sent
I0725 11:04:45.217179       7 log.go:172] (0xc0027202c0) (0xc0019aa5a0) Stream removed, broadcasting: 1
I0725 11:04:45.217188       7 log.go:172] (0xc0027202c0) Go away received
I0725 11:04:45.217379       7 log.go:172] (0xc0027202c0) (0xc0019aa5a0) Stream removed, broadcasting: 1
I0725 11:04:45.217412       7 log.go:172] (0xc0027202c0) (0xc001069680) Stream removed, broadcasting: 3
I0725 11:04:45.217476       7 log.go:172] (0xc0027202c0) (0xc0019aa640) Stream removed, broadcasting: 5
Jul 25 11:04:45.217: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:04:45.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3872" for this suite.

• [SLOW TEST:30.463 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2366,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:04:45.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7007
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-7007
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7007
Jul 25 11:04:45.381: INFO: Found 0 stateful pods, waiting for 1
Jul 25 11:04:55.409: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
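The framework polls the API until the single replica is up; watching the same thing by hand, assuming kubectl access to this cluster and the names from the STEP lines above, would look roughly like:

    kubectl get statefulset ss -n statefulset-7007 -w
    kubectl get pods -n statefulset-7007 -w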
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul 25 11:04:55.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:04:58.930: INFO: stderr: "I0725 11:04:58.796799    2179 log.go:172] (0xc00090e9a0) (0xc000bdc280) Create stream\nI0725 11:04:58.796863    2179 log.go:172] (0xc00090e9a0) (0xc000bdc280) Stream added, broadcasting: 1\nI0725 11:04:58.799601    2179 log.go:172] (0xc00090e9a0) Reply frame received for 1\nI0725 11:04:58.799651    2179 log.go:172] (0xc00090e9a0) (0xc0008ba000) Create stream\nI0725 11:04:58.799668    2179 log.go:172] (0xc00090e9a0) (0xc0008ba000) Stream added, broadcasting: 3\nI0725 11:04:58.800644    2179 log.go:172] (0xc00090e9a0) Reply frame received for 3\nI0725 11:04:58.800674    2179 log.go:172] (0xc00090e9a0) (0xc0008ba0a0) Create stream\nI0725 11:04:58.800681    2179 log.go:172] (0xc00090e9a0) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0725 11:04:58.801757    2179 log.go:172] (0xc00090e9a0) Reply frame received for 5\nI0725 11:04:58.894848    2179 log.go:172] (0xc00090e9a0) Data frame received for 5\nI0725 11:04:58.894887    2179 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0725 11:04:58.894912    2179 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:04:58.922772    2179 log.go:172] (0xc00090e9a0) Data frame received for 5\nI0725 11:04:58.922937    2179 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0725 11:04:58.922981    2179 log.go:172] (0xc00090e9a0) Data frame received for 3\nI0725 11:04:58.923000    2179 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0725 11:04:58.923017    2179 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0725 11:04:58.923034    2179 log.go:172] (0xc00090e9a0) Data frame received for 3\nI0725 11:04:58.923046    2179 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0725 11:04:58.924569    2179 log.go:172] (0xc00090e9a0) Data frame received for 1\nI0725 11:04:58.924588    2179 log.go:172] (0xc000bdc280) (1) Data frame handling\nI0725 11:04:58.924596    2179 log.go:172] (0xc000bdc280) (1) Data frame sent\nI0725 11:04:58.924604    2179 log.go:172] (0xc00090e9a0) (0xc000bdc280) Stream removed, broadcasting: 1\nI0725 11:04:58.924617    2179 log.go:172] (0xc00090e9a0) Go away received\nI0725 11:04:58.925047    2179 log.go:172] (0xc00090e9a0) (0xc000bdc280) Stream removed, broadcasting: 1\nI0725 11:04:58.925071    2179 log.go:172] (0xc00090e9a0) (0xc0008ba000) Stream removed, broadcasting: 3\nI0725 11:04:58.925083    2179 log.go:172] (0xc00090e9a0) (0xc0008ba0a0) Stream removed, broadcasting: 5\n"
Jul 25 11:04:58.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:04:58.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 25 11:04:58.934: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 25 11:05:08.937: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
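The mv above is how the test makes ss-0 unhealthy: the webserver container serves from /usr/local/apache2/htdocs, so parking index.html in /tmp makes its readiness check fail and the pod drops to Ready=false (ContainersNotReady in the condition dumps below). A direct way to watch that condition, assuming the same pod and namespace, is:

    kubectl get pod ss-0 -n statefulset-7007 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'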
Jul 25 11:05:08.937: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:05:08.991: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:08.991: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:08.991: INFO: 
Jul 25 11:05:08.991: INFO: StatefulSet ss has not reached scale 3, at 1
Jul 25 11:05:09.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.953250042s
Jul 25 11:05:11.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947743948s
Jul 25 11:05:12.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.942596264s
Jul 25 11:05:13.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.748234399s
Jul 25 11:05:14.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.677537586s
Jul 25 11:05:15.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.672964806s
Jul 25 11:05:16.288: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.668434184s
Jul 25 11:05:17.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.655895551s
Jul 25 11:05:18.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 652.174011ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7007
Jul 25 11:05:19.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:05:19.601: INFO: stderr: "I0725 11:05:19.507477    2212 log.go:172] (0xc0009ea000) (0xc0009a0000) Create stream\nI0725 11:05:19.507535    2212 log.go:172] (0xc0009ea000) (0xc0009a0000) Stream added, broadcasting: 1\nI0725 11:05:19.511559    2212 log.go:172] (0xc0009ea000) Reply frame received for 1\nI0725 11:05:19.511631    2212 log.go:172] (0xc0009ea000) (0xc0009b2000) Create stream\nI0725 11:05:19.511661    2212 log.go:172] (0xc0009ea000) (0xc0009b2000) Stream added, broadcasting: 3\nI0725 11:05:19.513338    2212 log.go:172] (0xc0009ea000) Reply frame received for 3\nI0725 11:05:19.513378    2212 log.go:172] (0xc0009ea000) (0xc0009a0140) Create stream\nI0725 11:05:19.513388    2212 log.go:172] (0xc0009ea000) (0xc0009a0140) Stream added, broadcasting: 5\nI0725 11:05:19.514439    2212 log.go:172] (0xc0009ea000) Reply frame received for 5\nI0725 11:05:19.593130    2212 log.go:172] (0xc0009ea000) Data frame received for 3\nI0725 11:05:19.593176    2212 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0725 11:05:19.593208    2212 log.go:172] (0xc0009b2000) (3) Data frame sent\nI0725 11:05:19.593231    2212 log.go:172] (0xc0009ea000) Data frame received for 3\nI0725 11:05:19.593247    2212 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0725 11:05:19.593270    2212 log.go:172] (0xc0009ea000) Data frame received for 5\nI0725 11:05:19.593288    2212 log.go:172] (0xc0009a0140) (5) Data frame handling\nI0725 11:05:19.593310    2212 log.go:172] (0xc0009a0140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 11:05:19.593322    2212 log.go:172] (0xc0009ea000) Data frame received for 5\nI0725 11:05:19.593398    2212 log.go:172] (0xc0009a0140) (5) Data frame handling\nI0725 11:05:19.595044    2212 log.go:172] (0xc0009ea000) Data frame received for 1\nI0725 11:05:19.595068    2212 log.go:172] (0xc0009a0000) (1) Data frame handling\nI0725 11:05:19.595086    2212 log.go:172] (0xc0009a0000) (1) Data frame sent\nI0725 11:05:19.595106    2212 log.go:172] (0xc0009ea000) (0xc0009a0000) Stream removed, broadcasting: 1\nI0725 11:05:19.595592    2212 log.go:172] (0xc0009ea000) Go away received\nI0725 11:05:19.595804    2212 log.go:172] (0xc0009ea000) (0xc0009a0000) Stream removed, broadcasting: 1\nI0725 11:05:19.595830    2212 log.go:172] (0xc0009ea000) (0xc0009b2000) Stream removed, broadcasting: 3\nI0725 11:05:19.595838    2212 log.go:172] (0xc0009ea000) (0xc0009a0140) Stream removed, broadcasting: 5\n"
Jul 25 11:05:19.601: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 25 11:05:19.601: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 25 11:05:19.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:05:19.794: INFO: stderr: "I0725 11:05:19.722916    2232 log.go:172] (0xc0000e0370) (0xc00099e000) Create stream\nI0725 11:05:19.722998    2232 log.go:172] (0xc0000e0370) (0xc00099e000) Stream added, broadcasting: 1\nI0725 11:05:19.725959    2232 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0725 11:05:19.726016    2232 log.go:172] (0xc0000e0370) (0xc000564000) Create stream\nI0725 11:05:19.726040    2232 log.go:172] (0xc0000e0370) (0xc000564000) Stream added, broadcasting: 3\nI0725 11:05:19.726906    2232 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0725 11:05:19.726945    2232 log.go:172] (0xc0000e0370) (0xc000578000) Create stream\nI0725 11:05:19.726963    2232 log.go:172] (0xc0000e0370) (0xc000578000) Stream added, broadcasting: 5\nI0725 11:05:19.727871    2232 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0725 11:05:19.788222    2232 log.go:172] (0xc0000e0370) Data frame received for 3\nI0725 11:05:19.788245    2232 log.go:172] (0xc000564000) (3) Data frame handling\nI0725 11:05:19.788257    2232 log.go:172] (0xc000564000) (3) Data frame sent\nI0725 11:05:19.788262    2232 log.go:172] (0xc0000e0370) Data frame received for 3\nI0725 11:05:19.788266    2232 log.go:172] (0xc000564000) (3) Data frame handling\nI0725 11:05:19.788379    2232 log.go:172] (0xc0000e0370) Data frame received for 5\nI0725 11:05:19.788414    2232 log.go:172] (0xc000578000) (5) Data frame handling\nI0725 11:05:19.788461    2232 log.go:172] (0xc000578000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0725 11:05:19.788491    2232 log.go:172] (0xc0000e0370) Data frame received for 5\nI0725 11:05:19.788510    2232 log.go:172] (0xc000578000) (5) Data frame handling\nI0725 11:05:19.790126    2232 log.go:172] (0xc0000e0370) Data frame received for 1\nI0725 11:05:19.790144    2232 log.go:172] (0xc00099e000) (1) Data frame handling\nI0725 11:05:19.790162    2232 log.go:172] (0xc00099e000) (1) Data frame sent\nI0725 11:05:19.790175    2232 log.go:172] (0xc0000e0370) (0xc00099e000) Stream removed, broadcasting: 1\nI0725 11:05:19.790204    2232 log.go:172] (0xc0000e0370) Go away received\nI0725 11:05:19.790456    2232 log.go:172] (0xc0000e0370) (0xc00099e000) Stream removed, broadcasting: 1\nI0725 11:05:19.790469    2232 log.go:172] (0xc0000e0370) (0xc000564000) Stream removed, broadcasting: 3\nI0725 11:05:19.790473    2232 log.go:172] (0xc0000e0370) (0xc000578000) Stream removed, broadcasting: 5\n"
Jul 25 11:05:19.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 25 11:05:19.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 25 11:05:19.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:05:20.004: INFO: stderr: "I0725 11:05:19.922549    2254 log.go:172] (0xc00098c370) (0xc0008d41e0) Create stream\nI0725 11:05:19.922618    2254 log.go:172] (0xc00098c370) (0xc0008d41e0) Stream added, broadcasting: 1\nI0725 11:05:19.925508    2254 log.go:172] (0xc00098c370) Reply frame received for 1\nI0725 11:05:19.925542    2254 log.go:172] (0xc00098c370) (0xc000a06000) Create stream\nI0725 11:05:19.925552    2254 log.go:172] (0xc00098c370) (0xc000a06000) Stream added, broadcasting: 3\nI0725 11:05:19.926604    2254 log.go:172] (0xc00098c370) Reply frame received for 3\nI0725 11:05:19.926635    2254 log.go:172] (0xc00098c370) (0xc0008d4320) Create stream\nI0725 11:05:19.926646    2254 log.go:172] (0xc00098c370) (0xc0008d4320) Stream added, broadcasting: 5\nI0725 11:05:19.927436    2254 log.go:172] (0xc00098c370) Reply frame received for 5\nI0725 11:05:19.997644    2254 log.go:172] (0xc00098c370) Data frame received for 5\nI0725 11:05:19.997686    2254 log.go:172] (0xc0008d4320) (5) Data frame handling\nI0725 11:05:19.997713    2254 log.go:172] (0xc0008d4320) (5) Data frame sent\nI0725 11:05:19.997740    2254 log.go:172] (0xc00098c370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0725 11:05:19.997766    2254 log.go:172] (0xc0008d4320) (5) Data frame handling\nI0725 11:05:19.997804    2254 log.go:172] (0xc00098c370) Data frame received for 3\nI0725 11:05:19.997833    2254 log.go:172] (0xc000a06000) (3) Data frame handling\nI0725 11:05:19.997858    2254 log.go:172] (0xc000a06000) (3) Data frame sent\nI0725 11:05:19.997872    2254 log.go:172] (0xc00098c370) Data frame received for 3\nI0725 11:05:19.997885    2254 log.go:172] (0xc000a06000) (3) Data frame handling\nI0725 11:05:19.998853    2254 log.go:172] (0xc00098c370) Data frame received for 1\nI0725 11:05:19.998872    2254 log.go:172] (0xc0008d41e0) (1) Data frame handling\nI0725 11:05:19.998882    2254 log.go:172] (0xc0008d41e0) (1) Data frame sent\nI0725 11:05:19.999000    2254 log.go:172] (0xc00098c370) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0725 11:05:19.999029    2254 log.go:172] (0xc00098c370) Go away received\nI0725 11:05:19.999375    2254 log.go:172] (0xc00098c370) (0xc0008d41e0) Stream removed, broadcasting: 1\nI0725 11:05:19.999408    2254 log.go:172] (0xc00098c370) (0xc000a06000) Stream removed, broadcasting: 3\nI0725 11:05:19.999417    2254 log.go:172] (0xc00098c370) (0xc0008d4320) Stream removed, broadcasting: 5\n"
Jul 25 11:05:20.004: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 25 11:05:20.004: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 25 11:05:20.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul 25 11:05:30.040: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:05:30.040: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:05:30.040: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
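The scale-up itself is driven through the API by the framework; an equivalent manual step, assuming the same StatefulSet name and namespace, would be roughly:

    kubectl scale statefulset ss --replicas=3 -n statefulset-7007

Note that ss-1 and ss-2 were scheduled within a second of each other even while ss-0 was still unready (PodScheduled 11:05:08 and 11:05:09 in the dumps above), which is the burst behaviour this case exercises.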
STEP: Scale down will not halt with unhealthy stateful pod
Jul 25 11:05:30.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:05:30.237: INFO: stderr: "I0725 11:05:30.165496    2276 log.go:172] (0xc0009353f0) (0xc000a3a820) Create stream\nI0725 11:05:30.165562    2276 log.go:172] (0xc0009353f0) (0xc000a3a820) Stream added, broadcasting: 1\nI0725 11:05:30.171721    2276 log.go:172] (0xc0009353f0) Reply frame received for 1\nI0725 11:05:30.171764    2276 log.go:172] (0xc0009353f0) (0xc000a3a000) Create stream\nI0725 11:05:30.171774    2276 log.go:172] (0xc0009353f0) (0xc000a3a000) Stream added, broadcasting: 3\nI0725 11:05:30.172590    2276 log.go:172] (0xc0009353f0) Reply frame received for 3\nI0725 11:05:30.172609    2276 log.go:172] (0xc0009353f0) (0xc00057d680) Create stream\nI0725 11:05:30.172626    2276 log.go:172] (0xc0009353f0) (0xc00057d680) Stream added, broadcasting: 5\nI0725 11:05:30.173592    2276 log.go:172] (0xc0009353f0) Reply frame received for 5\nI0725 11:05:30.232554    2276 log.go:172] (0xc0009353f0) Data frame received for 5\nI0725 11:05:30.232589    2276 log.go:172] (0xc00057d680) (5) Data frame handling\nI0725 11:05:30.232608    2276 log.go:172] (0xc00057d680) (5) Data frame sent\nI0725 11:05:30.232617    2276 log.go:172] (0xc0009353f0) Data frame received for 5\nI0725 11:05:30.232623    2276 log.go:172] (0xc00057d680) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:05:30.232652    2276 log.go:172] (0xc0009353f0) Data frame received for 3\nI0725 11:05:30.232688    2276 log.go:172] (0xc000a3a000) (3) Data frame handling\nI0725 11:05:30.232823    2276 log.go:172] (0xc000a3a000) (3) Data frame sent\nI0725 11:05:30.232859    2276 log.go:172] (0xc0009353f0) Data frame received for 3\nI0725 11:05:30.232876    2276 log.go:172] (0xc000a3a000) (3) Data frame handling\nI0725 11:05:30.234053    2276 log.go:172] (0xc0009353f0) Data frame received for 1\nI0725 11:05:30.234077    2276 log.go:172] (0xc000a3a820) (1) Data frame handling\nI0725 11:05:30.234099    2276 log.go:172] (0xc000a3a820) (1) Data frame sent\nI0725 11:05:30.234130    2276 log.go:172] (0xc0009353f0) (0xc000a3a820) Stream removed, broadcasting: 1\nI0725 11:05:30.234154    2276 log.go:172] (0xc0009353f0) Go away received\nI0725 11:05:30.234385    2276 log.go:172] (0xc0009353f0) (0xc000a3a820) Stream removed, broadcasting: 1\nI0725 11:05:30.234399    2276 log.go:172] (0xc0009353f0) (0xc000a3a000) Stream removed, broadcasting: 3\nI0725 11:05:30.234405    2276 log.go:172] (0xc0009353f0) (0xc00057d680) Stream removed, broadcasting: 5\n"
Jul 25 11:05:30.238: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:05:30.238: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 25 11:05:30.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:05:30.489: INFO: stderr: "I0725 11:05:30.366369    2296 log.go:172] (0xc0002bc840) (0xc00046e320) Create stream\nI0725 11:05:30.366426    2296 log.go:172] (0xc0002bc840) (0xc00046e320) Stream added, broadcasting: 1\nI0725 11:05:30.369221    2296 log.go:172] (0xc0002bc840) Reply frame received for 1\nI0725 11:05:30.369293    2296 log.go:172] (0xc0002bc840) (0xc00087c000) Create stream\nI0725 11:05:30.369315    2296 log.go:172] (0xc0002bc840) (0xc00087c000) Stream added, broadcasting: 3\nI0725 11:05:30.370702    2296 log.go:172] (0xc0002bc840) Reply frame received for 3\nI0725 11:05:30.370740    2296 log.go:172] (0xc0002bc840) (0xc0004c6aa0) Create stream\nI0725 11:05:30.370760    2296 log.go:172] (0xc0002bc840) (0xc0004c6aa0) Stream added, broadcasting: 5\nI0725 11:05:30.371997    2296 log.go:172] (0xc0002bc840) Reply frame received for 5\nI0725 11:05:30.438977    2296 log.go:172] (0xc0002bc840) Data frame received for 5\nI0725 11:05:30.439008    2296 log.go:172] (0xc0004c6aa0) (5) Data frame handling\nI0725 11:05:30.439023    2296 log.go:172] (0xc0004c6aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:05:30.482553    2296 log.go:172] (0xc0002bc840) Data frame received for 3\nI0725 11:05:30.482574    2296 log.go:172] (0xc00087c000) (3) Data frame handling\nI0725 11:05:30.482591    2296 log.go:172] (0xc00087c000) (3) Data frame sent\nI0725 11:05:30.482665    2296 log.go:172] (0xc0002bc840) Data frame received for 3\nI0725 11:05:30.482679    2296 log.go:172] (0xc00087c000) (3) Data frame handling\nI0725 11:05:30.482883    2296 log.go:172] (0xc0002bc840) Data frame received for 5\nI0725 11:05:30.482915    2296 log.go:172] (0xc0004c6aa0) (5) Data frame handling\nI0725 11:05:30.484480    2296 log.go:172] (0xc0002bc840) Data frame received for 1\nI0725 11:05:30.484497    2296 log.go:172] (0xc00046e320) (1) Data frame handling\nI0725 11:05:30.484507    2296 log.go:172] (0xc00046e320) (1) Data frame sent\nI0725 11:05:30.484518    2296 log.go:172] (0xc0002bc840) (0xc00046e320) Stream removed, broadcasting: 1\nI0725 11:05:30.484864    2296 log.go:172] (0xc0002bc840) Go away received\nI0725 11:05:30.484902    2296 log.go:172] (0xc0002bc840) (0xc00046e320) Stream removed, broadcasting: 1\nI0725 11:05:30.484924    2296 log.go:172] (0xc0002bc840) (0xc00087c000) Stream removed, broadcasting: 3\nI0725 11:05:30.484933    2296 log.go:172] (0xc0002bc840) (0xc0004c6aa0) Stream removed, broadcasting: 5\n"
Jul 25 11:05:30.489: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:05:30.489: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 25 11:05:30.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:05:30.770: INFO: stderr: "I0725 11:05:30.641452    2316 log.go:172] (0xc000b580b0) (0xc00051ab40) Create stream\nI0725 11:05:30.641507    2316 log.go:172] (0xc000b580b0) (0xc00051ab40) Stream added, broadcasting: 1\nI0725 11:05:30.646163    2316 log.go:172] (0xc000b580b0) Reply frame received for 1\nI0725 11:05:30.646266    2316 log.go:172] (0xc000b580b0) (0xc00097e000) Create stream\nI0725 11:05:30.646291    2316 log.go:172] (0xc000b580b0) (0xc00097e000) Stream added, broadcasting: 3\nI0725 11:05:30.647555    2316 log.go:172] (0xc000b580b0) Reply frame received for 3\nI0725 11:05:30.647596    2316 log.go:172] (0xc000b580b0) (0xc0006e52c0) Create stream\nI0725 11:05:30.647610    2316 log.go:172] (0xc000b580b0) (0xc0006e52c0) Stream added, broadcasting: 5\nI0725 11:05:30.648529    2316 log.go:172] (0xc000b580b0) Reply frame received for 5\nI0725 11:05:30.708217    2316 log.go:172] (0xc000b580b0) Data frame received for 5\nI0725 11:05:30.708247    2316 log.go:172] (0xc0006e52c0) (5) Data frame handling\nI0725 11:05:30.708270    2316 log.go:172] (0xc0006e52c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:05:30.761697    2316 log.go:172] (0xc000b580b0) Data frame received for 3\nI0725 11:05:30.761725    2316 log.go:172] (0xc00097e000) (3) Data frame handling\nI0725 11:05:30.761750    2316 log.go:172] (0xc00097e000) (3) Data frame sent\nI0725 11:05:30.761918    2316 log.go:172] (0xc000b580b0) Data frame received for 3\nI0725 11:05:30.761960    2316 log.go:172] (0xc00097e000) (3) Data frame handling\nI0725 11:05:30.761989    2316 log.go:172] (0xc000b580b0) Data frame received for 5\nI0725 11:05:30.762003    2316 log.go:172] (0xc0006e52c0) (5) Data frame handling\nI0725 11:05:30.763543    2316 log.go:172] (0xc000b580b0) Data frame received for 1\nI0725 11:05:30.763556    2316 log.go:172] (0xc00051ab40) (1) Data frame handling\nI0725 11:05:30.763563    2316 log.go:172] (0xc00051ab40) (1) Data frame sent\nI0725 11:05:30.763571    2316 log.go:172] (0xc000b580b0) (0xc00051ab40) Stream removed, broadcasting: 1\nI0725 11:05:30.763823    2316 log.go:172] (0xc000b580b0) (0xc00051ab40) Stream removed, broadcasting: 1\nI0725 11:05:30.763834    2316 log.go:172] (0xc000b580b0) (0xc00097e000) Stream removed, broadcasting: 3\nI0725 11:05:30.763930    2316 log.go:172] (0xc000b580b0) Go away received\nI0725 11:05:30.763964    2316 log.go:172] (0xc000b580b0) (0xc0006e52c0) Stream removed, broadcasting: 5\n"
Jul 25 11:05:30.770: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:05:30.770: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 25 11:05:30.770: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:05:30.773: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 25 11:05:40.782: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 25 11:05:40.782: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 25 11:05:40.782: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 25 11:05:40.798: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:40.798: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:40.798: INFO: ss-1  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:08 +0000 UTC  }]
Jul 25 11:05:40.798: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:40.798: INFO: 
Jul 25 11:05:40.798: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 25 11:05:41.930: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:41.930: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:41.930: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:08 +0000 UTC  }]
Jul 25 11:05:41.930: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:41.930: INFO: 
Jul 25 11:05:41.930: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 25 11:05:42.953: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:42.953: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:42.953: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:08 +0000 UTC  }]
Jul 25 11:05:42.953: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:42.953: INFO: 
Jul 25 11:05:42.953: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 25 11:05:43.966: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:43.966: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:43.966: INFO: ss-1  kali-worker   Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:08 +0000 UTC  }]
Jul 25 11:05:43.966: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:43.966: INFO: 
Jul 25 11:05:43.966: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 25 11:05:44.970: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:44.970: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:44.970: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:44.970: INFO: 
Jul 25 11:05:44.970: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 25 11:05:45.983: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:45.983: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:45.983: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:45.983: INFO: 
Jul 25 11:05:45.983: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 25 11:05:46.988: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:46.988: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:46.988: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:46.988: INFO: 
Jul 25 11:05:46.988: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 25 11:05:47.993: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:47.993: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:47.993: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:47.993: INFO: 
Jul 25 11:05:47.993: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 25 11:05:48.997: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:48.997: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:48.998: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:48.998: INFO: 
Jul 25 11:05:48.998: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 25 11:05:50.003: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 25 11:05:50.003: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:04:45 +0000 UTC  }]
Jul 25 11:05:50.003: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-25 11:05:09 +0000 UTC  }]
Jul 25 11:05:50.003: INFO: 
Jul 25 11:05:50.003: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7007
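The scale-down likewise goes through the API; done by hand it would be roughly:

    kubectl scale statefulset ss --replicas=0 -n statefulset-7007

The exec attempts that follow are the framework retrying its restore of index.html on ss-0 while that pod is terminating and then gone, which is why they fail first with "container not found" and afterwards with pods "ss-0" not found.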
Jul 25 11:05:51.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:05:51.140: INFO: rc: 1
Jul 25 11:05:51.140: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jul 25 11:06:01.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:01.238: INFO: rc: 1
Jul 25 11:06:01.238: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:06:11.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:11.340: INFO: rc: 1
Jul 25 11:06:11.340: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:06:21.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:21.430: INFO: rc: 1
Jul 25 11:06:21.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:06:31.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:31.522: INFO: rc: 1
Jul 25 11:06:31.522: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:06:41.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:41.626: INFO: rc: 1
Jul 25 11:06:41.626: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:06:51.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:06:51.736: INFO: rc: 1
Jul 25 11:06:51.736: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:01.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:01.823: INFO: rc: 1
Jul 25 11:07:01.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:11.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:11.933: INFO: rc: 1
Jul 25 11:07:11.933: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:21.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:22.053: INFO: rc: 1
Jul 25 11:07:22.053: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:32.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:32.163: INFO: rc: 1
Jul 25 11:07:32.163: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:42.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:42.252: INFO: rc: 1
Jul 25 11:07:42.252: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:07:52.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:07:52.345: INFO: rc: 1
Jul 25 11:07:52.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:08:02.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:08:02.439: INFO: rc: 1
Jul 25 11:08:02.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:08:12.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:08:12.537: INFO: rc: 1
Jul 25 11:08:12.537: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:08:22.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:08:22.705: INFO: rc: 1
Jul 25 11:08:22.705: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:08:32.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:08:32.855: INFO: rc: 1
Jul 25 11:08:32.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 25 11:08:42 - 11:10:44: INFO: 13 further identical RunHostCmd attempts, one every 10s; every attempt returned rc: 1 with empty stdout and the same stderr: Error from server (NotFound): pods "ss-0" not found (exit status 1)
Jul 25 11:10:54.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7007 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:10:54.266: INFO: rc: 1
Jul 25 11:10:54.266: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jul 25 11:10:54.266: INFO: Scaling statefulset ss to 0
Jul 25 11:10:54.276: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 25 11:10:54.278: INFO: Deleting all statefulset in ns statefulset-7007
Jul 25 11:10:54.280: INFO: Scaling statefulset ss to 0
Jul 25 11:10:54.290: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:10:54.291: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:10:54.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7007" for this suite.

• [SLOW TEST:369.087 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":130,"skipped":2371,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:10:54.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-ba50131b-1d69-4a3e-ad9d-671add84dffa
STEP: Creating a pod to test consume configMaps
Jul 25 11:10:54.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8" in namespace "configmap-1971" to be "Succeeded or Failed"
Jul 25 11:10:54.453: INFO: Pod "pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.760613ms
Jul 25 11:10:56.565: INFO: Pod "pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143127023s
Jul 25 11:10:58.570: INFO: Pod "pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147869406s
STEP: Saw pod success
Jul 25 11:10:58.570: INFO: Pod "pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8" satisfied condition "Succeeded or Failed"
Jul 25 11:10:58.573: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8 container configmap-volume-test: 
STEP: delete the pod
Jul 25 11:10:58.676: INFO: Waiting for pod pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8 to disappear
Jul 25 11:10:58.937: INFO: Pod pod-configmaps-f1bf1d6b-d0f2-40ae-a9b6-bbe6109acad8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:10:58.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1971" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2385,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:10:58.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:11:07.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3552" for this suite.

• [SLOW TEST:8.112 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:11:07.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:11:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1737" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":133,"skipped":2462,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:11:07.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-91a2d5df-bd09-4aed-8e39-92e98757ab0a in namespace container-probe-7062
Jul 25 11:11:11.543: INFO: Started pod busybox-91a2d5df-bd09-4aed-8e39-92e98757ab0a in namespace container-probe-7062
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 11:11:11.546: INFO: Initial restart count of pod busybox-91a2d5df-bd09-4aed-8e39-92e98757ab0a is 0
Jul 25 11:11:57.883: INFO: Restart count of pod container-probe-7062/busybox-91a2d5df-bd09-4aed-8e39-92e98757ab0a is now 1 (46.336870976s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:11:57.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7062" for this suite.

• [SLOW TEST:50.782 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2468,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:11:58.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8300
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 25 11:11:58.084: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 25 11:11:58.171: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:12:00.393: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:12:02.201: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:12:04.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:06.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:08.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:10.176: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:12.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:14.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:16.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:18.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 25 11:12:20.175: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 25 11:12:20.181: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 25 11:12:24.216: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.92:8080/dial?request=hostname&protocol=udp&host=10.244.2.237&port=8081&tries=1'] Namespace:pod-network-test-8300 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:12:24.216: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:12:24.251774       7 log.go:172] (0xc002720210) (0xc001aba780) Create stream
I0725 11:12:24.251805       7 log.go:172] (0xc002720210) (0xc001aba780) Stream added, broadcasting: 1
I0725 11:12:24.253896       7 log.go:172] (0xc002720210) Reply frame received for 1
I0725 11:12:24.253929       7 log.go:172] (0xc002720210) (0xc001aba820) Create stream
I0725 11:12:24.253949       7 log.go:172] (0xc002720210) (0xc001aba820) Stream added, broadcasting: 3
I0725 11:12:24.254963       7 log.go:172] (0xc002720210) Reply frame received for 3
I0725 11:12:24.255010       7 log.go:172] (0xc002720210) (0xc0013de460) Create stream
I0725 11:12:24.255027       7 log.go:172] (0xc002720210) (0xc0013de460) Stream added, broadcasting: 5
I0725 11:12:24.256260       7 log.go:172] (0xc002720210) Reply frame received for 5
I0725 11:12:24.338704       7 log.go:172] (0xc002720210) Data frame received for 3
I0725 11:12:24.338738       7 log.go:172] (0xc001aba820) (3) Data frame handling
I0725 11:12:24.338759       7 log.go:172] (0xc001aba820) (3) Data frame sent
I0725 11:12:24.339208       7 log.go:172] (0xc002720210) Data frame received for 5
I0725 11:12:24.339239       7 log.go:172] (0xc0013de460) (5) Data frame handling
I0725 11:12:24.339452       7 log.go:172] (0xc002720210) Data frame received for 3
I0725 11:12:24.339477       7 log.go:172] (0xc001aba820) (3) Data frame handling
I0725 11:12:24.341260       7 log.go:172] (0xc002720210) Data frame received for 1
I0725 11:12:24.341326       7 log.go:172] (0xc001aba780) (1) Data frame handling
I0725 11:12:24.341356       7 log.go:172] (0xc001aba780) (1) Data frame sent
I0725 11:12:24.341378       7 log.go:172] (0xc002720210) (0xc001aba780) Stream removed, broadcasting: 1
I0725 11:12:24.341396       7 log.go:172] (0xc002720210) Go away received
I0725 11:12:24.341576       7 log.go:172] (0xc002720210) (0xc001aba780) Stream removed, broadcasting: 1
I0725 11:12:24.341616       7 log.go:172] (0xc002720210) (0xc001aba820) Stream removed, broadcasting: 3
I0725 11:12:24.341642       7 log.go:172] (0xc002720210) (0xc0013de460) Stream removed, broadcasting: 5
Jul 25 11:12:24.341: INFO: Waiting for responses: map[]
Jul 25 11:12:24.345: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.92:8080/dial?request=hostname&protocol=udp&host=10.244.1.91&port=8081&tries=1'] Namespace:pod-network-test-8300 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 25 11:12:24.345: INFO: >>> kubeConfig: /root/.kube/config
I0725 11:12:24.378214       7 log.go:172] (0xc0021b6420) (0xc00124cfa0) Create stream
I0725 11:12:24.378246       7 log.go:172] (0xc0021b6420) (0xc00124cfa0) Stream added, broadcasting: 1
I0725 11:12:24.379942       7 log.go:172] (0xc0021b6420) Reply frame received for 1
I0725 11:12:24.379969       7 log.go:172] (0xc0021b6420) (0xc0013de640) Create stream
I0725 11:12:24.379979       7 log.go:172] (0xc0021b6420) (0xc0013de640) Stream added, broadcasting: 3
I0725 11:12:24.381043       7 log.go:172] (0xc0021b6420) Reply frame received for 3
I0725 11:12:24.381085       7 log.go:172] (0xc0021b6420) (0xc001aba8c0) Create stream
I0725 11:12:24.381102       7 log.go:172] (0xc0021b6420) (0xc001aba8c0) Stream added, broadcasting: 5
I0725 11:12:24.381961       7 log.go:172] (0xc0021b6420) Reply frame received for 5
I0725 11:12:24.450875       7 log.go:172] (0xc0021b6420) Data frame received for 3
I0725 11:12:24.450905       7 log.go:172] (0xc0013de640) (3) Data frame handling
I0725 11:12:24.450934       7 log.go:172] (0xc0013de640) (3) Data frame sent
I0725 11:12:24.451535       7 log.go:172] (0xc0021b6420) Data frame received for 3
I0725 11:12:24.451579       7 log.go:172] (0xc0013de640) (3) Data frame handling
I0725 11:12:24.451943       7 log.go:172] (0xc0021b6420) Data frame received for 5
I0725 11:12:24.451969       7 log.go:172] (0xc001aba8c0) (5) Data frame handling
I0725 11:12:24.454151       7 log.go:172] (0xc0021b6420) Data frame received for 1
I0725 11:12:24.454187       7 log.go:172] (0xc00124cfa0) (1) Data frame handling
I0725 11:12:24.454207       7 log.go:172] (0xc00124cfa0) (1) Data frame sent
I0725 11:12:24.454226       7 log.go:172] (0xc0021b6420) (0xc00124cfa0) Stream removed, broadcasting: 1
I0725 11:12:24.454253       7 log.go:172] (0xc0021b6420) Go away received
I0725 11:12:24.454536       7 log.go:172] (0xc0021b6420) (0xc00124cfa0) Stream removed, broadcasting: 1
I0725 11:12:24.454559       7 log.go:172] (0xc0021b6420) (0xc0013de640) Stream removed, broadcasting: 3
I0725 11:12:24.454570       7 log.go:172] (0xc0021b6420) (0xc001aba8c0) Stream removed, broadcasting: 5
Jul 25 11:12:24.454: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:12:24.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8300" for this suite.

• [SLOW TEST:26.454 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2476,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:12:24.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-4a7c3ce5-2674-4efd-858c-41fd9fec1602
STEP: Creating a pod to test consume configMaps
Jul 25 11:12:24.536: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8" in namespace "projected-444" to be "Succeeded or Failed"
Jul 25 11:12:24.542: INFO: Pod "pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083453ms
Jul 25 11:12:26.621: INFO: Pod "pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084730724s
Jul 25 11:12:28.625: INFO: Pod "pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089115831s
STEP: Saw pod success
Jul 25 11:12:28.625: INFO: Pod "pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8" satisfied condition "Succeeded or Failed"
Jul 25 11:12:28.628: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 25 11:12:28.688: INFO: Waiting for pod pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8 to disappear
Jul 25 11:12:28.692: INFO: Pod pod-projected-configmaps-93141f6b-492a-4f85-8cd0-5616871d46f8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:12:28.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-444" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2488,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:12:28.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-63e08026-3a59-4f09-8647-a1c0060e4bd9
STEP: Creating a pod to test consume configMaps
Jul 25 11:12:29.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e" in namespace "configmap-7219" to be "Succeeded or Failed"
Jul 25 11:12:29.119: INFO: Pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.81726ms
Jul 25 11:12:31.122: INFO: Pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046123242s
Jul 25 11:12:33.126: INFO: Pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e": Phase="Running", Reason="", readiness=true. Elapsed: 4.049991007s
Jul 25 11:12:35.130: INFO: Pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053859063s
STEP: Saw pod success
Jul 25 11:12:35.130: INFO: Pod "pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e" satisfied condition "Succeeded or Failed"
Jul 25 11:12:35.132: INFO: Trying to get logs from node kali-worker pod pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e container configmap-volume-test: 
STEP: delete the pod
Jul 25 11:12:35.196: INFO: Waiting for pod pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e to disappear
Jul 25 11:12:35.201: INFO: Pod pod-configmaps-24cb92d4-1fd2-4dc3-bfa0-c6c8ef34625e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:12:35.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7219" for this suite.

• [SLOW TEST:6.511 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2506,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:12:35.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:12:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5888" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2507,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:12:39.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:12:39.528: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc" in namespace "security-context-test-1375" to be "Succeeded or Failed"
Jul 25 11:12:39.572: INFO: Pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.581394ms
Jul 25 11:12:41.723: INFO: Pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194741464s
Jul 25 11:12:43.726: INFO: Pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198413068s
Jul 25 11:12:43.726: INFO: Pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc" satisfied condition "Succeeded or Failed"
Jul 25 11:12:43.733: INFO: Got logs for pod "busybox-privileged-false-1a731dc2-3d7e-40f1-95c7-a69edba846fc": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:12:43.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1375" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2518,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:12:43.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0725 11:13:24.410196       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 11:13:24.410: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:24.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2838" for this suite.

• [SLOW TEST:40.694 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":140,"skipped":2530,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:24.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jul 25 11:13:24.499: INFO: namespace kubectl-704
Jul 25 11:13:24.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-704'
Jul 25 11:13:24.830: INFO: stderr: ""
Jul 25 11:13:24.830: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 25 11:13:25.833: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:13:25.833: INFO: Found 0 / 1
Jul 25 11:13:26.834: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:13:26.834: INFO: Found 0 / 1
Jul 25 11:13:27.833: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:13:27.834: INFO: Found 0 / 1
Jul 25 11:13:28.835: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:13:28.835: INFO: Found 1 / 1
Jul 25 11:13:28.835: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 25 11:13:28.838: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:13:28.838: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 25 11:13:28.838: INFO: wait on agnhost-master startup in kubectl-704 
Jul 25 11:13:28.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-bdg8d agnhost-master --namespace=kubectl-704'
Jul 25 11:13:28.948: INFO: stderr: ""
Jul 25 11:13:28.948: INFO: stdout: "Paused\n"
STEP: exposing RC
Jul 25 11:13:28.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-704'
Jul 25 11:13:29.101: INFO: stderr: ""
Jul 25 11:13:29.101: INFO: stdout: "service/rm2 exposed\n"
Jul 25 11:13:29.151: INFO: Service rm2 in namespace kubectl-704 found.
STEP: exposing service
Jul 25 11:13:31.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-704'
Jul 25 11:13:32.138: INFO: stderr: ""
Jul 25 11:13:32.138: INFO: stdout: "service/rm3 exposed\n"
Jul 25 11:13:32.438: INFO: Service rm3 in namespace kubectl-704 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:34.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-704" for this suite.

• [SLOW TEST:10.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":141,"skipped":2535,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:34.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b10e022e-02cb-40e2-b913-f786db4e5b18
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b10e022e-02cb-40e2-b913-f786db4e5b18
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:41.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9830" for this suite.

• [SLOW TEST:7.436 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2555,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:41.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:13:42.607: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 25 11:13:44.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:13:46.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272422, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:13:49.650: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:13:49.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:50.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8276" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.078 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":143,"skipped":2570,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:50.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:51.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1504" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":144,"skipped":2580,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:51.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:13:51.612: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-546" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":145,"skipped":2617,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:52.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-8896d78a-f0c0-47c8-a78d-d9208e892275
STEP: Creating a pod to test consume secrets
Jul 25 11:13:52.566: INFO: Waiting up to 5m0s for pod "pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52" in namespace "secrets-7412" to be "Succeeded or Failed"
Jul 25 11:13:52.606: INFO: Pod "pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52": Phase="Pending", Reason="", readiness=false. Elapsed: 40.531488ms
Jul 25 11:13:54.657: INFO: Pod "pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090970264s
Jul 25 11:13:56.660: INFO: Pod "pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094148408s
STEP: Saw pod success
Jul 25 11:13:56.660: INFO: Pod "pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52" satisfied condition "Succeeded or Failed"
Jul 25 11:13:56.662: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52 container secret-volume-test: 
STEP: delete the pod
Jul 25 11:13:56.952: INFO: Waiting for pod pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52 to disappear
Jul 25 11:13:56.959: INFO: Pod pod-secrets-7bbde382-77db-4900-9e69-0d598d060a52 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:13:56.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7412" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2627,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:13:56.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:14:05.224: INFO: DNS probes using dns-7/dns-test-2d564e21-c29c-4e40-9efc-e49bfa494499 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7" for this suite.

• [SLOW TEST:8.448 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":147,"skipped":2639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:05.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:14:06.999: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:14:09.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272447, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272447, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272447, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272446, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:14:12.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:14:12.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9920-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:13.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8556" for this suite.
STEP: Destroying namespace "webhook-8556-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.972 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":148,"skipped":2672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:13.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:17.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5128" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2698,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:17.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-78cb693c-fbb9-4882-b8f2-3a7c14cb0b9c
STEP: Creating a pod to test consume secrets
Jul 25 11:14:17.657: INFO: Waiting up to 5m0s for pod "pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068" in namespace "secrets-4222" to be "Succeeded or Failed"
Jul 25 11:14:17.685: INFO: Pod "pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068": Phase="Pending", Reason="", readiness=false. Elapsed: 27.697836ms
Jul 25 11:14:19.736: INFO: Pod "pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078971532s
Jul 25 11:14:21.740: INFO: Pod "pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082869959s
STEP: Saw pod success
Jul 25 11:14:21.740: INFO: Pod "pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068" satisfied condition "Succeeded or Failed"
Jul 25 11:14:21.743: INFO: Trying to get logs from node kali-worker pod pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068 container secret-volume-test: 
STEP: delete the pod
Jul 25 11:14:21.981: INFO: Waiting for pod pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068 to disappear
Jul 25 11:14:22.028: INFO: Pod pod-secrets-4deabd2f-9e44-4f9e-b929-efb6fc4e3068 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:22.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4222" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2700,"failed":0}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:22.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 25 11:14:29.678: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:30.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6706" for this suite.

• [SLOW TEST:8.696 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":151,"skipped":2703,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:30.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:14:32.664: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:14:34.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:14:36.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272472, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:14:39.766: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:14:39.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2580-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7762" for this suite.
STEP: Destroying namespace "webhook-7762-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.376 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":152,"skipped":2718,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:41.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 25 11:14:41.262: INFO: Waiting up to 5m0s for pod "downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3" in namespace "downward-api-3122" to be "Succeeded or Failed"
Jul 25 11:14:41.448: INFO: Pod "downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3": Phase="Pending", Reason="", readiness=false. Elapsed: 186.179789ms
Jul 25 11:14:43.653: INFO: Pod "downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39133993s
Jul 25 11:14:45.699: INFO: Pod "downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437328176s
STEP: Saw pod success
Jul 25 11:14:45.699: INFO: Pod "downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3" satisfied condition "Succeeded or Failed"
Jul 25 11:14:45.702: INFO: Trying to get logs from node kali-worker2 pod downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3 container dapi-container: 
STEP: delete the pod
Jul 25 11:14:45.726: INFO: Waiting for pod downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3 to disappear
Jul 25 11:14:45.737: INFO: Pod downward-api-b90dcaf8-1eb3-41f4-847a-e5a7523779d3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:45.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3122" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2724,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:45.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 25 11:14:45.887: INFO: Waiting up to 5m0s for pod "pod-54fd964f-9324-4ba0-83b0-3a8d7e950679" in namespace "emptydir-2869" to be "Succeeded or Failed"
Jul 25 11:14:45.891: INFO: Pod "pod-54fd964f-9324-4ba0-83b0-3a8d7e950679": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780751ms
Jul 25 11:14:47.938: INFO: Pod "pod-54fd964f-9324-4ba0-83b0-3a8d7e950679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051541671s
Jul 25 11:14:49.963: INFO: Pod "pod-54fd964f-9324-4ba0-83b0-3a8d7e950679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075750946s
STEP: Saw pod success
Jul 25 11:14:49.963: INFO: Pod "pod-54fd964f-9324-4ba0-83b0-3a8d7e950679" satisfied condition "Succeeded or Failed"
Jul 25 11:14:49.966: INFO: Trying to get logs from node kali-worker2 pod pod-54fd964f-9324-4ba0-83b0-3a8d7e950679 container test-container: 
STEP: delete the pod
Jul 25 11:14:50.131: INFO: Waiting for pod pod-54fd964f-9324-4ba0-83b0-3a8d7e950679 to disappear
Jul 25 11:14:50.135: INFO: Pod pod-54fd964f-9324-4ba0-83b0-3a8d7e950679 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:50.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2869" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2729,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:50.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-ff9b8349-9e4a-4092-8d85-c2f6e665c56d
STEP: Creating a pod to test consume configMaps
Jul 25 11:14:50.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08" in namespace "configmap-607" to be "Succeeded or Failed"
Jul 25 11:14:50.274: INFO: Pod "pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08": Phase="Pending", Reason="", readiness=false. Elapsed: 10.954619ms
Jul 25 11:14:52.278: INFO: Pod "pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015037042s
Jul 25 11:14:54.282: INFO: Pod "pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019326694s
STEP: Saw pod success
Jul 25 11:14:54.282: INFO: Pod "pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08" satisfied condition "Succeeded or Failed"
Jul 25 11:14:54.285: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08 container configmap-volume-test: 
STEP: delete the pod
Jul 25 11:14:54.305: INFO: Waiting for pod pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08 to disappear
Jul 25 11:14:54.376: INFO: Pod pod-configmaps-973f2a84-72b5-4ff3-bcf9-b00bb1e06e08 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:54.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-607" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2731,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:54.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:14:54.461: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-898f8f17-7f0d-4f4e-a0d8-2b334d134703" in namespace "security-context-test-7152" to be "Succeeded or Failed"
Jul 25 11:14:54.466: INFO: Pod "alpine-nnp-false-898f8f17-7f0d-4f4e-a0d8-2b334d134703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397838ms
Jul 25 11:14:56.645: INFO: Pod "alpine-nnp-false-898f8f17-7f0d-4f4e-a0d8-2b334d134703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183962993s
Jul 25 11:14:58.650: INFO: Pod "alpine-nnp-false-898f8f17-7f0d-4f4e-a0d8-2b334d134703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188467939s
Jul 25 11:14:58.650: INFO: Pod "alpine-nnp-false-898f8f17-7f0d-4f4e-a0d8-2b334d134703" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:14:58.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7152" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2743,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:14:58.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-4264/configmap-test-c2113e1d-f0b0-4ec8-9a13-c7197e24f80e
STEP: Creating a pod to test consume configMaps
Jul 25 11:14:59.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a" in namespace "configmap-4264" to be "Succeeded or Failed"
Jul 25 11:14:59.160: INFO: Pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.291361ms
Jul 25 11:15:01.164: INFO: Pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043622866s
Jul 25 11:15:03.244: INFO: Pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a": Phase="Running", Reason="", readiness=true. Elapsed: 4.123633276s
Jul 25 11:15:05.248: INFO: Pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127650619s
STEP: Saw pod success
Jul 25 11:15:05.248: INFO: Pod "pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a" satisfied condition "Succeeded or Failed"
Jul 25 11:15:05.251: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a container env-test: 
STEP: delete the pod
Jul 25 11:15:05.269: INFO: Waiting for pod pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a to disappear
Jul 25 11:15:05.284: INFO: Pod pod-configmaps-b1de3a7c-e095-4cee-af0a-39c409173f1a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:05.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4264" for this suite.

• [SLOW TEST:6.626 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2755,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:05.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 25 11:15:05.415: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-281 /api/v1/namespaces/watch-281/configmaps/e2e-watch-test-resource-version b119cec3-9c8a-4365-aef1-e3a0884997dd 4031111 0 2020-07-25 11:15:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-07-25 11:15:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 11:15:05.415: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-281 /api/v1/namespaces/watch-281/configmaps/e2e-watch-test-resource-version b119cec3-9c8a-4365-aef1-e3a0884997dd 4031112 0 2020-07-25 11:15:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-07-25 11:15:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:05.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-281" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":158,"skipped":2761,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:05.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:16.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2886" for this suite.

• [SLOW TEST:11.141 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":159,"skipped":2772,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:16.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:15:16.638: INFO: Creating ReplicaSet my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45
Jul 25 11:15:16.670: INFO: Pod name my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45: Found 0 pods out of 1
Jul 25 11:15:21.674: INFO: Pod name my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45: Found 1 pods out of 1
Jul 25 11:15:21.674: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45" is running
Jul 25 11:15:21.676: INFO: Pod "my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45-npkp7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:15:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:15:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:15:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:15:16 +0000 UTC Reason: Message:}])
Jul 25 11:15:21.676: INFO: Trying to dial the pod
Jul 25 11:15:26.688: INFO: Controller my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45: Got expected result from replica 1 [my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45-npkp7]: "my-hostname-basic-b8c691ab-ee6e-48b8-8a87-2494cbcffa45-npkp7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:26.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8942" for this suite.

• [SLOW TEST:10.134 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":160,"skipped":2783,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:26.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8742.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8742.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8742.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8742.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:15:32.870: INFO: DNS probes using dns-8742/dns-test-88bbb57b-f2a1-469b-8a54-187942dc47f2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:32.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8742" for this suite.

• [SLOW TEST:6.243 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":161,"skipped":2796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:32.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 25 11:15:43.960: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:43.987: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 25 11:15:45.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:45.992: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 25 11:15:47.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:47.991: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 25 11:15:49.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:49.992: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 25 11:15:51.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:51.992: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 25 11:15:53.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 25 11:15:53.991: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:15:53.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4496" for this suite.

• [SLOW TEST:21.059 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2845,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:15:54.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:15:54.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 25 11:15:57.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4242 create -f -'
Jul 25 11:16:01.132: INFO: stderr: ""
Jul 25 11:16:01.132: INFO: stdout: "e2e-test-crd-publish-openapi-1712-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 25 11:16:01.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4242 delete e2e-test-crd-publish-openapi-1712-crds test-cr'
Jul 25 11:16:01.263: INFO: stderr: ""
Jul 25 11:16:01.263: INFO: stdout: "e2e-test-crd-publish-openapi-1712-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jul 25 11:16:01.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4242 apply -f -'
Jul 25 11:16:01.490: INFO: stderr: ""
Jul 25 11:16:01.490: INFO: stdout: "e2e-test-crd-publish-openapi-1712-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 25 11:16:01.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4242 delete e2e-test-crd-publish-openapi-1712-crds test-cr'
Jul 25 11:16:01.586: INFO: stderr: ""
Jul 25 11:16:01.586: INFO: stdout: "e2e-test-crd-publish-openapi-1712-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 25 11:16:01.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1712-crds'
Jul 25 11:16:01.819: INFO: stderr: ""
Jul 25 11:16:01.819: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1712-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:16:04.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4242" for this suite.

• [SLOW TEST:10.737 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":163,"skipped":2873,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:16:04.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:16:05.839: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:16:07.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272565, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272565, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272565, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272565, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:16:10.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:16:11.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2436" for this suite.
STEP: Destroying namespace "webhook-2436-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.564 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":164,"skipped":2884,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:16:11.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 25 11:16:17.551: INFO: &Pod{ObjectMeta:{send-events-b697edf6-b515-4bbc-906b-a4eb4bb88bb0  events-8814 /api/v1/namespaces/events-8814/pods/send-events-b697edf6-b515-4bbc-906b-a4eb4bb88bb0 674016f7-e947-4ca5-a6e1-c69ada473361 4031548 0 2020-07-25 11:16:11 +0000 UTC   map[name:foo time:434259951] map[] [] []  [{e2e.test Update v1 2020-07-25 11:16:11 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 11:16:15 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nn7f7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nn7f7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nn7f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:16:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:16:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:16:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:16:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.3,StartTime:2020-07-25 11:16:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 11:16:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7e06de3187c73e7281df9c1cf86144ede896229639d1b2548a4aca3870dfe10a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jul 25 11:16:19.556: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 25 11:16:21.560: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:16:21.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8814" for this suite.

• [SLOW TEST:10.287 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":165,"skipped":2945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:16:21.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-d64c80be-b528-4590-95dd-2abe3c2746b0 in namespace container-probe-111
Jul 25 11:16:27.694: INFO: Started pod busybox-d64c80be-b528-4590-95dd-2abe3c2746b0 in namespace container-probe-111
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 11:16:27.698: INFO: Initial restart count of pod busybox-d64c80be-b528-4590-95dd-2abe3c2746b0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:20:28.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-111" for this suite.

• [SLOW TEST:247.286 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2991,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:20:28.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:20:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8771" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":167,"skipped":2992,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:20:28.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:20:29.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:20:31.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:20:33.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272829, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:20:36.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:20:46.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1786" for this suite.
STEP: Destroying namespace "webhook-1786-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.117 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":168,"skipped":2996,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:20:47.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:21:03.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9806" for this suite.

• [SLOW TEST:16.349 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":169,"skipped":3058,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:21:03.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 25 11:21:03.559: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:21:09.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8713" for this suite.

• [SLOW TEST:5.981 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":170,"skipped":3060,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:21:09.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:21:10.041: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:21:12.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272870, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272870, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272870, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731272869, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:21:15.131: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:21:15.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9122" for this suite.
STEP: Destroying namespace "webhook-9122-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.423 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":171,"skipped":3066,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:21:15.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 25 11:21:19.941: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:21:19.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5377" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3075,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:21:20.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9281
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 25 11:21:20.178: INFO: Found 0 stateful pods, waiting for 3
Jul 25 11:21:30.183: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:21:30.183: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:21:30.183: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 25 11:21:40.183: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:21:40.183: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:21:40.183: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:21:40.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9281 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:21:40.446: INFO: stderr: "I0725 11:21:40.323264    3141 log.go:172] (0xc00070cbb0) (0xc000809680) Create stream\nI0725 11:21:40.323343    3141 log.go:172] (0xc00070cbb0) (0xc000809680) Stream added, broadcasting: 1\nI0725 11:21:40.326511    3141 log.go:172] (0xc00070cbb0) Reply frame received for 1\nI0725 11:21:40.326558    3141 log.go:172] (0xc00070cbb0) (0xc0006bd5e0) Create stream\nI0725 11:21:40.326573    3141 log.go:172] (0xc00070cbb0) (0xc0006bd5e0) Stream added, broadcasting: 3\nI0725 11:21:40.327557    3141 log.go:172] (0xc00070cbb0) Reply frame received for 3\nI0725 11:21:40.327581    3141 log.go:172] (0xc00070cbb0) (0xc000809720) Create stream\nI0725 11:21:40.327589    3141 log.go:172] (0xc00070cbb0) (0xc000809720) Stream added, broadcasting: 5\nI0725 11:21:40.328596    3141 log.go:172] (0xc00070cbb0) Reply frame received for 5\nI0725 11:21:40.406761    3141 log.go:172] (0xc00070cbb0) Data frame received for 5\nI0725 11:21:40.406791    3141 log.go:172] (0xc000809720) (5) Data frame handling\nI0725 11:21:40.406810    3141 log.go:172] (0xc000809720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:21:40.437588    3141 log.go:172] (0xc00070cbb0) Data frame received for 3\nI0725 11:21:40.437622    3141 log.go:172] (0xc0006bd5e0) (3) Data frame handling\nI0725 11:21:40.437656    3141 log.go:172] (0xc0006bd5e0) (3) Data frame sent\nI0725 11:21:40.437906    3141 log.go:172] (0xc00070cbb0) Data frame received for 5\nI0725 11:21:40.437923    3141 log.go:172] (0xc000809720) (5) Data frame handling\nI0725 11:21:40.437962    3141 log.go:172] (0xc00070cbb0) Data frame received for 3\nI0725 11:21:40.438003    3141 log.go:172] (0xc0006bd5e0) (3) Data frame handling\nI0725 11:21:40.440325    3141 log.go:172] (0xc00070cbb0) Data frame received for 1\nI0725 11:21:40.440350    3141 log.go:172] (0xc000809680) (1) Data frame handling\nI0725 11:21:40.440363    3141 log.go:172] (0xc000809680) (1) Data frame sent\nI0725 11:21:40.440380    3141 log.go:172] (0xc00070cbb0) (0xc000809680) Stream removed, broadcasting: 1\nI0725 11:21:40.440399    3141 log.go:172] (0xc00070cbb0) Go away received\nI0725 11:21:40.441063    3141 log.go:172] (0xc00070cbb0) (0xc000809680) Stream removed, broadcasting: 1\nI0725 11:21:40.441094    3141 log.go:172] (0xc00070cbb0) (0xc0006bd5e0) Stream removed, broadcasting: 3\nI0725 11:21:40.441114    3141 log.go:172] (0xc00070cbb0) (0xc000809720) Stream removed, broadcasting: 5\n"
Jul 25 11:21:40.447: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:21:40.447: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 25 11:21:50.480: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 25 11:22:00.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9281 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:22:00.782: INFO: stderr: "I0725 11:22:00.697962    3161 log.go:172] (0xc0004ecbb0) (0xc0006915e0) Create stream\nI0725 11:22:00.698026    3161 log.go:172] (0xc0004ecbb0) (0xc0006915e0) Stream added, broadcasting: 1\nI0725 11:22:00.700817    3161 log.go:172] (0xc0004ecbb0) Reply frame received for 1\nI0725 11:22:00.700860    3161 log.go:172] (0xc0004ecbb0) (0xc0009e4000) Create stream\nI0725 11:22:00.700871    3161 log.go:172] (0xc0004ecbb0) (0xc0009e4000) Stream added, broadcasting: 3\nI0725 11:22:00.701838    3161 log.go:172] (0xc0004ecbb0) Reply frame received for 3\nI0725 11:22:00.701883    3161 log.go:172] (0xc0004ecbb0) (0xc0008d6000) Create stream\nI0725 11:22:00.701896    3161 log.go:172] (0xc0004ecbb0) (0xc0008d6000) Stream added, broadcasting: 5\nI0725 11:22:00.702766    3161 log.go:172] (0xc0004ecbb0) Reply frame received for 5\nI0725 11:22:00.773072    3161 log.go:172] (0xc0004ecbb0) Data frame received for 5\nI0725 11:22:00.773108    3161 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0725 11:22:00.773125    3161 log.go:172] (0xc0008d6000) (5) Data frame sent\nI0725 11:22:00.773138    3161 log.go:172] (0xc0004ecbb0) Data frame received for 5\nI0725 11:22:00.773147    3161 log.go:172] (0xc0008d6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 11:22:00.773172    3161 log.go:172] (0xc0004ecbb0) Data frame received for 3\nI0725 11:22:00.773184    3161 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0725 11:22:00.773196    3161 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0725 11:22:00.773333    3161 log.go:172] (0xc0004ecbb0) Data frame received for 3\nI0725 11:22:00.773352    3161 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0725 11:22:00.775290    3161 log.go:172] (0xc0004ecbb0) Data frame received for 1\nI0725 11:22:00.775319    3161 log.go:172] (0xc0006915e0) (1) Data frame handling\nI0725 11:22:00.775335    3161 log.go:172] (0xc0006915e0) (1) Data frame sent\nI0725 11:22:00.775483    3161 log.go:172] (0xc0004ecbb0) (0xc0006915e0) Stream removed, broadcasting: 1\nI0725 11:22:00.775638    3161 log.go:172] (0xc0004ecbb0) Go away received\nI0725 11:22:00.775871    3161 log.go:172] (0xc0004ecbb0) (0xc0006915e0) Stream removed, broadcasting: 1\nI0725 11:22:00.775895    3161 log.go:172] (0xc0004ecbb0) (0xc0009e4000) Stream removed, broadcasting: 3\nI0725 11:22:00.775912    3161 log.go:172] (0xc0004ecbb0) (0xc0008d6000) Stream removed, broadcasting: 5\n"
Jul 25 11:22:00.782: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 25 11:22:00.782: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

STEP: Rolling back to a previous revision
Jul 25 11:22:20.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9281 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 25 11:22:21.075: INFO: stderr: "I0725 11:22:20.938758    3184 log.go:172] (0xc000a6a160) (0xc000a4a0a0) Create stream\nI0725 11:22:20.938814    3184 log.go:172] (0xc000a6a160) (0xc000a4a0a0) Stream added, broadcasting: 1\nI0725 11:22:20.941569    3184 log.go:172] (0xc000a6a160) Reply frame received for 1\nI0725 11:22:20.941635    3184 log.go:172] (0xc000a6a160) (0xc000681360) Create stream\nI0725 11:22:20.941655    3184 log.go:172] (0xc000a6a160) (0xc000681360) Stream added, broadcasting: 3\nI0725 11:22:20.942786    3184 log.go:172] (0xc000a6a160) Reply frame received for 3\nI0725 11:22:20.942825    3184 log.go:172] (0xc000a6a160) (0xc0002c4000) Create stream\nI0725 11:22:20.942839    3184 log.go:172] (0xc000a6a160) (0xc0002c4000) Stream added, broadcasting: 5\nI0725 11:22:20.943833    3184 log.go:172] (0xc000a6a160) Reply frame received for 5\nI0725 11:22:21.038457    3184 log.go:172] (0xc000a6a160) Data frame received for 5\nI0725 11:22:21.038478    3184 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0725 11:22:21.038490    3184 log.go:172] (0xc0002c4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0725 11:22:21.067737    3184 log.go:172] (0xc000a6a160) Data frame received for 5\nI0725 11:22:21.067836    3184 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0725 11:22:21.067879    3184 log.go:172] (0xc000a6a160) Data frame received for 3\nI0725 11:22:21.067896    3184 log.go:172] (0xc000681360) (3) Data frame handling\nI0725 11:22:21.067924    3184 log.go:172] (0xc000681360) (3) Data frame sent\nI0725 11:22:21.068002    3184 log.go:172] (0xc000a6a160) Data frame received for 3\nI0725 11:22:21.068020    3184 log.go:172] (0xc000681360) (3) Data frame handling\nI0725 11:22:21.070069    3184 log.go:172] (0xc000a6a160) Data frame received for 1\nI0725 11:22:21.070110    3184 log.go:172] (0xc000a4a0a0) (1) Data frame handling\nI0725 11:22:21.070149    3184 log.go:172] (0xc000a4a0a0) (1) Data frame sent\nI0725 11:22:21.070184    3184 log.go:172] (0xc000a6a160) (0xc000a4a0a0) Stream removed, broadcasting: 1\nI0725 11:22:21.070240    3184 log.go:172] (0xc000a6a160) Go away received\nI0725 11:22:21.070659    3184 log.go:172] (0xc000a6a160) (0xc000a4a0a0) Stream removed, broadcasting: 1\nI0725 11:22:21.070683    3184 log.go:172] (0xc000a6a160) (0xc000681360) Stream removed, broadcasting: 3\nI0725 11:22:21.070695    3184 log.go:172] (0xc000a6a160) (0xc0002c4000) Stream removed, broadcasting: 5\n"
Jul 25 11:22:21.075: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 25 11:22:21.075: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 25 11:22:31.110: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 25 11:22:41.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9281 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 25 11:22:41.414: INFO: stderr: "I0725 11:22:41.288234    3206 log.go:172] (0xc0003efb80) (0xc0009a20a0) Create stream\nI0725 11:22:41.288326    3206 log.go:172] (0xc0003efb80) (0xc0009a20a0) Stream added, broadcasting: 1\nI0725 11:22:41.291210    3206 log.go:172] (0xc0003efb80) Reply frame received for 1\nI0725 11:22:41.291255    3206 log.go:172] (0xc0003efb80) (0xc0006872c0) Create stream\nI0725 11:22:41.291272    3206 log.go:172] (0xc0003efb80) (0xc0006872c0) Stream added, broadcasting: 3\nI0725 11:22:41.292273    3206 log.go:172] (0xc0003efb80) Reply frame received for 3\nI0725 11:22:41.292310    3206 log.go:172] (0xc0003efb80) (0xc0006874a0) Create stream\nI0725 11:22:41.292318    3206 log.go:172] (0xc0003efb80) (0xc0006874a0) Stream added, broadcasting: 5\nI0725 11:22:41.293364    3206 log.go:172] (0xc0003efb80) Reply frame received for 5\nI0725 11:22:41.405409    3206 log.go:172] (0xc0003efb80) Data frame received for 5\nI0725 11:22:41.405448    3206 log.go:172] (0xc0006874a0) (5) Data frame handling\nI0725 11:22:41.405478    3206 log.go:172] (0xc0006874a0) (5) Data frame sent\nI0725 11:22:41.405501    3206 log.go:172] (0xc0003efb80) Data frame received for 5\nI0725 11:22:41.405518    3206 log.go:172] (0xc0006874a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0725 11:22:41.405568    3206 log.go:172] (0xc0003efb80) Data frame received for 3\nI0725 11:22:41.405595    3206 log.go:172] (0xc0006872c0) (3) Data frame handling\nI0725 11:22:41.405671    3206 log.go:172] (0xc0006872c0) (3) Data frame sent\nI0725 11:22:41.405699    3206 log.go:172] (0xc0003efb80) Data frame received for 3\nI0725 11:22:41.405717    3206 log.go:172] (0xc0006872c0) (3) Data frame handling\nI0725 11:22:41.407223    3206 log.go:172] (0xc0003efb80) Data frame received for 1\nI0725 11:22:41.407246    3206 log.go:172] (0xc0009a20a0) (1) Data frame handling\nI0725 11:22:41.407260    3206 log.go:172] (0xc0009a20a0) (1) Data frame sent\nI0725 11:22:41.407270    3206 log.go:172] (0xc0003efb80) (0xc0009a20a0) Stream removed, broadcasting: 1\nI0725 11:22:41.407287    3206 log.go:172] (0xc0003efb80) Go away received\nI0725 11:22:41.407754    3206 log.go:172] (0xc0003efb80) (0xc0009a20a0) Stream removed, broadcasting: 1\nI0725 11:22:41.407780    3206 log.go:172] (0xc0003efb80) (0xc0006872c0) Stream removed, broadcasting: 3\nI0725 11:22:41.407793    3206 log.go:172] (0xc0003efb80) (0xc0006874a0) Stream removed, broadcasting: 5\n"
Jul 25 11:22:41.414: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 25 11:22:41.414: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 25 11:22:51.483: INFO: Waiting for StatefulSet statefulset-9281/ss2 to complete update
Jul 25 11:22:51.483: INFO: Waiting for Pod statefulset-9281/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 25 11:22:51.483: INFO: Waiting for Pod statefulset-9281/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 25 11:22:51.483: INFO: Waiting for Pod statefulset-9281/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 25 11:23:01.490: INFO: Waiting for StatefulSet statefulset-9281/ss2 to complete update
Jul 25 11:23:01.490: INFO: Waiting for Pod statefulset-9281/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 25 11:23:11.509: INFO: Waiting for StatefulSet statefulset-9281/ss2 to complete update
Jul 25 11:23:11.509: INFO: Waiting for Pod statefulset-9281/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 25 11:23:21.490: INFO: Deleting all statefulset in ns statefulset-9281
Jul 25 11:23:21.492: INFO: Scaling statefulset ss2 to 0
Jul 25 11:24:01.606: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:24:01.609: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:01.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9281" for this suite.

• [SLOW TEST:161.614 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":173,"skipped":3077,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:01.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Jul 25 11:24:01.736: INFO: Waiting up to 5m0s for pod "client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4" in namespace "containers-4241" to be "Succeeded or Failed"
Jul 25 11:24:01.739: INFO: Pod "client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.725122ms
Jul 25 11:24:03.743: INFO: Pod "client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007032026s
Jul 25 11:24:05.747: INFO: Pod "client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011069747s
STEP: Saw pod success
Jul 25 11:24:05.747: INFO: Pod "client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4" satisfied condition "Succeeded or Failed"
Jul 25 11:24:05.750: INFO: Trying to get logs from node kali-worker2 pod client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4 container test-container: 
STEP: delete the pod
Jul 25 11:24:05.797: INFO: Waiting for pod client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4 to disappear
Jul 25 11:24:05.811: INFO: Pod client-containers-a8bbd5f2-ea5c-4b6c-ae0b-85c93f6817c4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:05.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4241" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3080,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:05.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 25 11:24:10.796: INFO: Successfully updated pod "labelsupdate846fe7f4-31f3-4dc4-9596-16496c542c5e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:14.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6873" for this suite.

• [SLOW TEST:9.039 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3097,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:14.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 25 11:24:15.066: INFO: Waiting up to 5m0s for pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166" in namespace "emptydir-183" to be "Succeeded or Failed"
Jul 25 11:24:15.093: INFO: Pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 27.142094ms
Jul 25 11:24:17.141: INFO: Pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074942673s
Jul 25 11:24:19.145: INFO: Pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166": Phase="Running", Reason="", readiness=true. Elapsed: 4.07879751s
Jul 25 11:24:21.150: INFO: Pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083821041s
STEP: Saw pod success
Jul 25 11:24:21.150: INFO: Pod "pod-fe020770-31a2-4e3d-8a35-417ea1579166" satisfied condition "Succeeded or Failed"
Jul 25 11:24:21.153: INFO: Trying to get logs from node kali-worker2 pod pod-fe020770-31a2-4e3d-8a35-417ea1579166 container test-container: 
STEP: delete the pod
Jul 25 11:24:21.346: INFO: Waiting for pod pod-fe020770-31a2-4e3d-8a35-417ea1579166 to disappear
Jul 25 11:24:21.369: INFO: Pod pod-fe020770-31a2-4e3d-8a35-417ea1579166 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:21.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-183" for this suite.

• [SLOW TEST:6.518 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:21.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Jul 25 11:24:21.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Jul 25 11:24:21.718: INFO: stderr: ""
Jul 25 11:24:21.718: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:21.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3914" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":177,"skipped":3182,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:21.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:24:22.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6120" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3191,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:24:22.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 25 11:24:23.381: INFO: Pod name wrapped-volume-race-a7ad553e-b7dd-4000-8fc9-c88a78069a5a: Found 0 pods out of 5
Jul 25 11:24:28.388: INFO: Pod name wrapped-volume-race-a7ad553e-b7dd-4000-8fc9-c88a78069a5a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a7ad553e-b7dd-4000-8fc9-c88a78069a5a in namespace emptydir-wrapper-2742, will wait for the garbage collector to delete the pods
Jul 25 11:24:42.828: INFO: Deleting ReplicationController wrapped-volume-race-a7ad553e-b7dd-4000-8fc9-c88a78069a5a took: 7.262763ms
Jul 25 11:24:43.128: INFO: Terminating ReplicationController wrapped-volume-race-a7ad553e-b7dd-4000-8fc9-c88a78069a5a pods took: 300.262332ms
STEP: Creating RC which spawns configmap-volume pods
Jul 25 11:24:53.465: INFO: Pod name wrapped-volume-race-c3fd2703-7274-4874-b66a-936c02684fad: Found 0 pods out of 5
Jul 25 11:24:58.510: INFO: Pod name wrapped-volume-race-c3fd2703-7274-4874-b66a-936c02684fad: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c3fd2703-7274-4874-b66a-936c02684fad in namespace emptydir-wrapper-2742, will wait for the garbage collector to delete the pods
Jul 25 11:25:14.629: INFO: Deleting ReplicationController wrapped-volume-race-c3fd2703-7274-4874-b66a-936c02684fad took: 15.734354ms
Jul 25 11:25:14.930: INFO: Terminating ReplicationController wrapped-volume-race-c3fd2703-7274-4874-b66a-936c02684fad pods took: 300.26144ms
STEP: Creating RC which spawns configmap-volume pods
Jul 25 11:25:23.774: INFO: Pod name wrapped-volume-race-e431c7ff-9647-4795-a600-52062cccdbcc: Found 0 pods out of 5
Jul 25 11:25:28.782: INFO: Pod name wrapped-volume-race-e431c7ff-9647-4795-a600-52062cccdbcc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e431c7ff-9647-4795-a600-52062cccdbcc in namespace emptydir-wrapper-2742, will wait for the garbage collector to delete the pods
Jul 25 11:25:43.078: INFO: Deleting ReplicationController wrapped-volume-race-e431c7ff-9647-4795-a600-52062cccdbcc took: 23.544704ms
Jul 25 11:25:43.379: INFO: Terminating ReplicationController wrapped-volume-race-e431c7ff-9647-4795-a600-52062cccdbcc pods took: 300.260862ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:25:54.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2742" for this suite.

• [SLOW TEST:92.163 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":179,"skipped":3206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:25:54.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 25 11:25:54.284: INFO: Waiting up to 5m0s for pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5" in namespace "emptydir-7080" to be "Succeeded or Failed"
Jul 25 11:25:54.302: INFO: Pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.194001ms
Jul 25 11:25:56.333: INFO: Pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048782693s
Jul 25 11:25:58.357: INFO: Pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.073072918s
Jul 25 11:26:00.379: INFO: Pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095056446s
STEP: Saw pod success
Jul 25 11:26:00.379: INFO: Pod "pod-cf5fc1f5-a820-4501-94e0-13dc091208b5" satisfied condition "Succeeded or Failed"
Jul 25 11:26:00.392: INFO: Trying to get logs from node kali-worker pod pod-cf5fc1f5-a820-4501-94e0-13dc091208b5 container test-container: 
STEP: delete the pod
Jul 25 11:26:00.566: INFO: Waiting for pod pod-cf5fc1f5-a820-4501-94e0-13dc091208b5 to disappear
Jul 25 11:26:00.632: INFO: Pod pod-cf5fc1f5-a820-4501-94e0-13dc091208b5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7080" for this suite.

• [SLOW TEST:6.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3247,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:00.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-fd70f7f9-5e20-4f00-a00e-4306d3ed1b34
STEP: Creating a pod to test consume configMaps
Jul 25 11:26:01.412: INFO: Waiting up to 5m0s for pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b" in namespace "configmap-3593" to be "Succeeded or Failed"
Jul 25 11:26:01.633: INFO: Pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b": Phase="Pending", Reason="", readiness=false. Elapsed: 221.384174ms
Jul 25 11:26:03.663: INFO: Pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251495154s
Jul 25 11:26:05.734: INFO: Pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b": Phase="Running", Reason="", readiness=true. Elapsed: 4.322466495s
Jul 25 11:26:07.740: INFO: Pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.328185787s
STEP: Saw pod success
Jul 25 11:26:07.740: INFO: Pod "pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b" satisfied condition "Succeeded or Failed"
Jul 25 11:26:07.742: INFO: Trying to get logs from node kali-worker pod pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b container configmap-volume-test: 
STEP: delete the pod
Jul 25 11:26:07.795: INFO: Waiting for pod pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b to disappear
Jul 25 11:26:07.805: INFO: Pod pod-configmaps-83e83008-aa6f-446b-b1d9-4c285d4f863b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:07.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3593" for this suite.

• [SLOW TEST:7.064 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3251,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:07.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:12.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4205" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":182,"skipped":3258,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:12.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 25 11:26:12.720: INFO: Waiting up to 5m0s for pod "pod-413a8a51-7277-4e8a-818d-49673c76bb27" in namespace "emptydir-5829" to be "Succeeded or Failed"
Jul 25 11:26:12.746: INFO: Pod "pod-413a8a51-7277-4e8a-818d-49673c76bb27": Phase="Pending", Reason="", readiness=false. Elapsed: 26.154291ms
Jul 25 11:26:14.750: INFO: Pod "pod-413a8a51-7277-4e8a-818d-49673c76bb27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030295909s
Jul 25 11:26:16.754: INFO: Pod "pod-413a8a51-7277-4e8a-818d-49673c76bb27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033852052s
STEP: Saw pod success
Jul 25 11:26:16.754: INFO: Pod "pod-413a8a51-7277-4e8a-818d-49673c76bb27" satisfied condition "Succeeded or Failed"
Jul 25 11:26:16.756: INFO: Trying to get logs from node kali-worker2 pod pod-413a8a51-7277-4e8a-818d-49673c76bb27 container test-container: 
STEP: delete the pod
Jul 25 11:26:16.808: INFO: Waiting for pod pod-413a8a51-7277-4e8a-818d-49673c76bb27 to disappear
Jul 25 11:26:16.841: INFO: Pod pod-413a8a51-7277-4e8a-818d-49673c76bb27 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:16.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5829" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3265,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:16.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 25 11:26:16.918: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8820 /api/v1/namespaces/watch-8820/configmaps/e2e-watch-test-watch-closed 34d81eec-b33e-42fc-86f9-a9265bc17c7a 4035130 0 2020-07-25 11:26:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-25 11:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 11:26:16.918: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8820 /api/v1/namespaces/watch-8820/configmaps/e2e-watch-test-watch-closed 34d81eec-b33e-42fc-86f9-a9265bc17c7a 4035131 0 2020-07-25 11:26:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-25 11:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 25 11:26:16.934: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8820 /api/v1/namespaces/watch-8820/configmaps/e2e-watch-test-watch-closed 34d81eec-b33e-42fc-86f9-a9265bc17c7a 4035132 0 2020-07-25 11:26:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-25 11:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 25 11:26:16.934: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8820 /api/v1/namespaces/watch-8820/configmaps/e2e-watch-test-watch-closed 34d81eec-b33e-42fc-86f9-a9265bc17c7a 4035133 0 2020-07-25 11:26:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-25 11:26:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:16.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8820" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":184,"skipped":3266,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:16.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Jul 25 11:26:17.012: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix915075562/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:17.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7089" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":185,"skipped":3273,"failed":0}

------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:17.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Jul 25 11:26:17.749: INFO: created pod pod-service-account-defaultsa
Jul 25 11:26:17.749: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 25 11:26:17.758: INFO: created pod pod-service-account-mountsa
Jul 25 11:26:17.758: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 25 11:26:17.791: INFO: created pod pod-service-account-nomountsa
Jul 25 11:26:17.791: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 25 11:26:17.815: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 25 11:26:17.815: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 25 11:26:17.854: INFO: created pod pod-service-account-mountsa-mountspec
Jul 25 11:26:17.854: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 25 11:26:17.869: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 25 11:26:17.869: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 25 11:26:17.912: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 25 11:26:17.912: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 25 11:26:17.942: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 25 11:26:17.942: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 25 11:26:17.981: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 25 11:26:17.981: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:17.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6624" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":186,"skipped":3273,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:18.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:26:19.035: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:26:21.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273180, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273178, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:26:23.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273180, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273178, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:26:25.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273180, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273178, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:26:27.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273180, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273178, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:26:29.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273179, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273180, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273178, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:26:32.224: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:26:32.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1620" for this suite.
STEP: Destroying namespace "webhook-1620-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.661 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":187,"skipped":3281,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:26:32.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7213
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 25 11:26:32.998: INFO: Found 0 stateful pods, waiting for 3
Jul 25 11:26:43.003: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:26:43.003: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:26:43.003: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 25 11:26:53.003: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:26:53.003: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:26:53.003: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 25 11:26:53.030: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 25 11:27:03.113: INFO: Updating stateful set ss2
Jul 25 11:27:03.139: INFO: Waiting for Pod statefulset-7213/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 25 11:27:13.145: INFO: Waiting for Pod statefulset-7213/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul 25 11:27:23.923: INFO: Found 2 stateful pods, waiting for 3
Jul 25 11:27:33.928: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:27:33.928: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 25 11:27:33.928: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 25 11:27:33.952: INFO: Updating stateful set ss2
Jul 25 11:27:34.027: INFO: Waiting for Pod statefulset-7213/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 25 11:27:44.055: INFO: Updating stateful set ss2
Jul 25 11:27:44.092: INFO: Waiting for StatefulSet statefulset-7213/ss2 to complete update
Jul 25 11:27:44.092: INFO: Waiting for Pod statefulset-7213/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 25 11:27:54.101: INFO: Deleting all statefulset in ns statefulset-7213
Jul 25 11:27:54.104: INFO: Scaling statefulset ss2 to 0
Jul 25 11:28:14.189: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:28:14.191: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7213" for this suite.

• [SLOW TEST:101.535 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":188,"skipped":3286,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:14.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 25 11:28:15.227: INFO: Waiting up to 5m0s for pod "pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11" in namespace "emptydir-5216" to be "Succeeded or Failed"
Jul 25 11:28:15.309: INFO: Pod "pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11": Phase="Pending", Reason="", readiness=false. Elapsed: 82.226504ms
Jul 25 11:28:17.327: INFO: Pod "pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099742894s
Jul 25 11:28:19.331: INFO: Pod "pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104433735s
STEP: Saw pod success
Jul 25 11:28:19.331: INFO: Pod "pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11" satisfied condition "Succeeded or Failed"
Jul 25 11:28:19.334: INFO: Trying to get logs from node kali-worker2 pod pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11 container test-container: 
STEP: delete the pod
Jul 25 11:28:19.402: INFO: Waiting for pod pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11 to disappear
Jul 25 11:28:19.410: INFO: Pod pod-f1aec24f-1e80-4a61-a411-8a2d1b95af11 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:19.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5216" for this suite.

• [SLOW TEST:5.089 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3288,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:19.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 25 11:28:19.527: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 25 11:28:19.564: INFO: Waiting for terminating namespaces to be deleted...
Jul 25 11:28:19.578: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Jul 25 11:28:19.598: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 25 11:28:19.598: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 11:28:19.598: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 25 11:28:19.598: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 11:28:19.598: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Jul 25 11:28:19.604: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:28:19.604: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 11:28:19.604: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:28:19.604: INFO: 	Container kindnet-cni ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Jul 25 11:28:19.725: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker
Jul 25 11:28:19.725: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2
Jul 25 11:28:19.725: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker
Jul 25 11:28:19.725: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Jul 25 11:28:19.725: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Jul 25 11:28:19.783: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb.1624fb5284528d48], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7123/filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb.1624fb531b5e14d3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb.1624fb53552e7b0b], Reason = [Created], Message = [Created container filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb.1624fb5366c678f3], Reason = [Started], Message = [Started container filler-pod-957e18dd-4bc6-4398-8545-92c676d853fb]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d.1624fb5282a9b92d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7123/filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d.1624fb52cd29527a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d.1624fb53190ae7a9], Reason = [Created], Message = [Created container filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d.1624fb533bc41535], Reason = [Started], Message = [Started container filler-pod-b547a3e5-dcd6-4e7d-aaa5-00154a32931d]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1624fb53f1e0eefa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7123" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.744 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":190,"skipped":3297,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:27.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9783
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[]
Jul 25 11:28:27.289: INFO: Get endpoints failed (23.79589ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 25 11:28:28.550: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[] (1.284435991s elapsed)
STEP: Creating pod pod1 in namespace services-9783
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod1:[100]]
Jul 25 11:28:31.914: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod1:[100]] (3.336319762s elapsed)
STEP: Creating pod pod2 in namespace services-9783
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 25 11:28:36.306: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod1:[100] pod2:[101]] (4.387050402s elapsed)
STEP: Deleting pod pod1 in namespace services-9783
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod2:[101]]
Jul 25 11:28:37.372: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod2:[101]] (1.061684542s elapsed)
STEP: Deleting pod pod2 in namespace services-9783
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[]
Jul 25 11:28:38.496: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[] (1.120096373s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:38.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9783" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.427 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":191,"skipped":3308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:38.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:51.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5123" for this suite.

• [SLOW TEST:13.394 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":192,"skipped":3331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:51.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:28:52.071: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul 25 11:28:54.185: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:28:55.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8422" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":193,"skipped":3361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:28:55.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 25 11:28:56.479: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:29:05.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3036" for this suite.

• [SLOW TEST:10.409 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":194,"skipped":3384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:29:05.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:29:38.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6056" for this suite.
STEP: Destroying namespace "nsdeletetest-2372" for this suite.
Jul 25 11:29:38.366: INFO: Namespace nsdeletetest-2372 was already deleted
STEP: Destroying namespace "nsdeletetest-1160" for this suite.

• [SLOW TEST:32.606 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":195,"skipped":3424,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:29:38.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 25 11:29:46.567: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 25 11:29:46.640: INFO: Pod pod-with-poststart-http-hook still exists
Jul 25 11:29:48.640: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 25 11:29:48.645: INFO: Pod pod-with-poststart-http-hook still exists
Jul 25 11:29:50.640: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 25 11:29:50.645: INFO: Pod pod-with-poststart-http-hook still exists
Jul 25 11:29:52.640: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 25 11:29:52.645: INFO: Pod pod-with-poststart-http-hook still exists
Jul 25 11:29:54.641: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 25 11:29:54.645: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:29:54.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9814" for this suite.

• [SLOW TEST:16.284 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:29:54.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:29:54.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 25 11:29:57.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-322 create -f -'
Jul 25 11:30:04.637: INFO: stderr: ""
Jul 25 11:30:04.637: INFO: stdout: "e2e-test-crd-publish-openapi-6631-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 25 11:30:04.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-322 delete e2e-test-crd-publish-openapi-6631-crds test-cr'
Jul 25 11:30:04.757: INFO: stderr: ""
Jul 25 11:30:04.757: INFO: stdout: "e2e-test-crd-publish-openapi-6631-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 25 11:30:04.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-322 apply -f -'
Jul 25 11:30:05.015: INFO: stderr: ""
Jul 25 11:30:05.015: INFO: stdout: "e2e-test-crd-publish-openapi-6631-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 25 11:30:05.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-322 delete e2e-test-crd-publish-openapi-6631-crds test-cr'
Jul 25 11:30:05.123: INFO: stderr: ""
Jul 25 11:30:05.123: INFO: stdout: "e2e-test-crd-publish-openapi-6631-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 25 11:30:05.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6631-crds'
Jul 25 11:30:05.346: INFO: stderr: ""
Jul 25 11:30:05.346: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6631-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:08.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-322" for this suite.

• [SLOW TEST:13.627 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":197,"skipped":3482,"failed":0}
SSSSSSSSSSSSSSSS
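
For reference, the kubectl sequence above can be reproduced against any CRD whose embedded object preserves unknown fields. A minimal sketch follows; the group, kind and field names (example.com, Waldo) are illustrative placeholders, not the generated identifiers from this run, and the --server/--kubeconfig flags used by the suite are omitted.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: waldos, singular: waldo, kind: Waldo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            # embedded object that keeps properties the schema does not declare
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF
# client-side validation accepts arbitrary properties under spec/status
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Waldo
metadata: {name: test-cr}
spec: {anythingGoes: true, bars: [{name: x}]}
EOF
kubectl explain waldos    # describes the published schema (discovery may take a few seconds to refresh)
kubectl delete waldo test-cr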
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:08.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:13.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7593" for this suite.

• [SLOW TEST:5.138 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":198,"skipped":3498,"failed":0}
SSSS
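
The adoption scenario above is driven through the client library, but it reduces to two objects: an orphan pod carrying a 'name' label, then a replication controller whose selector matches it. A rough sketch with illustrative names (the image mirrors the httpd image used elsewhere in this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# instead of creating a new replica, the controller adopts the existing pod:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController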
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:13.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-67f31c58-236e-488a-9edb-bd1f4da976ed
STEP: Creating a pod to test consume secrets
Jul 25 11:30:13.577: INFO: Waiting up to 5m0s for pod "pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae" in namespace "secrets-5587" to be "Succeeded or Failed"
Jul 25 11:30:13.580: INFO: Pod "pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.498076ms
Jul 25 11:30:15.583: INFO: Pod "pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006359635s
Jul 25 11:30:17.587: INFO: Pod "pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009899299s
STEP: Saw pod success
Jul 25 11:30:17.587: INFO: Pod "pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae" satisfied condition "Succeeded or Failed"
Jul 25 11:30:17.589: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae container secret-env-test: 
STEP: delete the pod
Jul 25 11:30:17.736: INFO: Waiting for pod pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae to disappear
Jul 25 11:30:17.742: INFO: Pod pod-secrets-1dfe9ee1-fdf1-40eb-a51a-88bc5fe906ae no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:17.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5587" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3502,"failed":0}
SSSSS
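
A minimal sketch of the secret-to-environment-variable consumption exercised above, with illustrative object names (the container name secret-env-test mirrors the one in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs secret-env-pod    # expect SECRET_DATA=value-1 once the pod has Succeeded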
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:17.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:30:17.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6" in namespace "projected-1646" to be "Succeeded or Failed"
Jul 25 11:30:17.958: INFO: Pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.84396ms
Jul 25 11:30:19.988: INFO: Pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039068535s
Jul 25 11:30:21.992: INFO: Pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04228274s
Jul 25 11:30:23.995: INFO: Pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045528069s
STEP: Saw pod success
Jul 25 11:30:23.995: INFO: Pod "downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6" satisfied condition "Succeeded or Failed"
Jul 25 11:30:23.998: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6 container client-container: 
STEP: delete the pod
Jul 25 11:30:24.024: INFO: Waiting for pod downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6 to disappear
Jul 25 11:30:24.030: INFO: Pod downwardapi-volume-1e77cd98-b4e1-49c3-aadd-28a0823236a6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:24.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1646" for this suite.

• [SLOW TEST:6.289 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3507,"failed":0}
SSSSSSSSSS
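
The downward API volume used here exposes the container's own resource limits as files. A sketch of the pod shape under test, with illustrative names (the container name client-container matches the log; the 64Mi limit is arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-volume-demo    # prints the memory limit in bytes (67108864 for 64Mi)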
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:24.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 25 11:30:24.121: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 25 11:30:24.148: INFO: Waiting for terminating namespaces to be deleted...
Jul 25 11:30:24.151: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 25 11:30:24.157: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 25 11:30:24.157: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 11:30:24.157: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 25 11:30:24.157: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 11:30:24.157: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 25 11:30:24.163: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:30:24.163: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 11:30:24.163: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:30:24.163: INFO: 	Container kindnet-cni ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d685def2-ceb1-4d2e-a15c-7fd5446b9b06 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d685def2-ceb1-4d2e-a15c-7fd5446b9b06 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d685def2-ceb1-4d2e-a15c-7fd5446b9b06
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:32.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7123" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:8.333 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":201,"skipped":3517,"failed":0}
SSSSS
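
The predicate check above boils down to labelling a node and asking for that label in a pod's nodeSelector. A rough kubectl equivalent; the label key example.com/e2e-demo is a placeholder (the suite generates a random key), while kali-worker2 is the node picked in this run:

kubectl label node kali-worker2 example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
kubectl get pod nodeselector-demo -o wide     # should be scheduled onto kali-worker2
kubectl label node kali-worker2 example.com/e2e-demo-    # clean up the label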
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:32.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-301d3999-845c-4b80-b3ad-3921cbcb9e4b
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:32.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1763" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":202,"skipped":3522,"failed":0}
SSS
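
This negative test only needs the create call to fail: a Secret whose data map contains an empty key is rejected by server-side validation. A sketch of an equivalent request (object name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
stringData:
  "": value-1
EOF
# expected outcome: the API server rejects the request because "" is not a valid key name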
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:32.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:30:33.245: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 25 11:30:35.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273433, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273433, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273433, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273433, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:30:38.384: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:30:38.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:30:39.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6541" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.273 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":203,"skipped":3525,"failed":0}
SSSSSSSS
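
The conversion path above is wired through the CRD's spec.conversion stanza: the API server calls the deployed webhook whenever an object stored at v1 is read or written at v2. A compressed sketch of such a CRD; the group, names, service coordinates and caBundle are placeholders, and the webhook deployment and service themselves are set up by the test:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        caBundle: <base64 CA bundle that signed the webhook serving cert>
        service: {namespace: crd-webhook-demo, name: crd-conversion-webhook, path: /crdconvert, port: 443}
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
EOF
# after creating an object at example.com/v1, reading it back at v2 exercises the webhook:
kubectl get widgets.v2.example.com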
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:30:39.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:30:39.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jul 25 11:30:40.661: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:40Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:30:40Z]] name:name1 resourceVersion:4036959 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0053b46c-1d9c-463b-a891-987624eb82f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul 25 11:30:50.667: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:50Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:30:50Z]] name:name2 resourceVersion:4037000 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3ef3d29b-1f8d-4f74-a870-146670328f47] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul 25 11:31:00.674: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:31:00Z]] name:name1 resourceVersion:4037031 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0053b46c-1d9c-463b-a891-987624eb82f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul 25 11:31:10.679: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:31:10Z]] name:name2 resourceVersion:4037061 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3ef3d29b-1f8d-4f74-a870-146670328f47] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul 25 11:31:20.687: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:40Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:31:00Z]] name:name1 resourceVersion:4037091 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0053b46c-1d9c-463b-a891-987624eb82f1] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul 25 11:31:30.696: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-25T11:30:50Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-25T11:31:10Z]] name:name2 resourceVersion:4037121 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3ef3d29b-1f8d-4f74-a870-146670328f47] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:31:41.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4120" for this suite.

• [SLOW TEST:61.501 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":204,"skipped":3533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
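
The watch itself is driven through the dynamic client in the test; a rough CLI equivalent against the same resource (group, kind and field names are taken from the events above, so they only exist while that CRD is installed):

kubectl get noxus.mygroup.example.com --watch &
kubectl apply -f - <<'EOF'
apiVersion: mygroup.example.com/v1beta1
kind: WishIHadChosenNoxu
metadata:
  name: name1
content:
  key: value
EOF
kubectl patch noxus.mygroup.example.com name1 --type=merge -p '{"dummy":"test"}'
kubectl delete noxus.mygroup.example.com name1
# the background watch prints one line per ADDED/MODIFIED/DELETED event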
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:31:41.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8458
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8458
STEP: creating replication controller externalsvc in namespace services-8458
I0725 11:31:41.440405       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8458, replica count: 2
I0725 11:31:44.490900       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 11:31:47.491161       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jul 25 11:31:47.531: INFO: Creating new exec pod
Jul 25 11:31:51.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-8458 execpod2rmmv -- /bin/sh -x -c nslookup clusterip-service'
Jul 25 11:31:51.753: INFO: stderr: "I0725 11:31:51.678730    3383 log.go:172] (0xc00003a420) (0xc000a00000) Create stream\nI0725 11:31:51.678797    3383 log.go:172] (0xc00003a420) (0xc000a00000) Stream added, broadcasting: 1\nI0725 11:31:51.681299    3383 log.go:172] (0xc00003a420) Reply frame received for 1\nI0725 11:31:51.681331    3383 log.go:172] (0xc00003a420) (0xc00056eb40) Create stream\nI0725 11:31:51.681339    3383 log.go:172] (0xc00003a420) (0xc00056eb40) Stream added, broadcasting: 3\nI0725 11:31:51.682171    3383 log.go:172] (0xc00003a420) Reply frame received for 3\nI0725 11:31:51.682194    3383 log.go:172] (0xc00003a420) (0xc000a000a0) Create stream\nI0725 11:31:51.682202    3383 log.go:172] (0xc00003a420) (0xc000a000a0) Stream added, broadcasting: 5\nI0725 11:31:51.683056    3383 log.go:172] (0xc00003a420) Reply frame received for 5\nI0725 11:31:51.742126    3383 log.go:172] (0xc00003a420) Data frame received for 5\nI0725 11:31:51.742155    3383 log.go:172] (0xc000a000a0) (5) Data frame handling\nI0725 11:31:51.742175    3383 log.go:172] (0xc000a000a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0725 11:31:51.746595    3383 log.go:172] (0xc00003a420) Data frame received for 3\nI0725 11:31:51.746615    3383 log.go:172] (0xc00056eb40) (3) Data frame handling\nI0725 11:31:51.746631    3383 log.go:172] (0xc00056eb40) (3) Data frame sent\nI0725 11:31:51.747321    3383 log.go:172] (0xc00003a420) Data frame received for 3\nI0725 11:31:51.747344    3383 log.go:172] (0xc00056eb40) (3) Data frame handling\nI0725 11:31:51.747357    3383 log.go:172] (0xc00056eb40) (3) Data frame sent\nI0725 11:31:51.747630    3383 log.go:172] (0xc00003a420) Data frame received for 5\nI0725 11:31:51.747654    3383 log.go:172] (0xc000a000a0) (5) Data frame handling\nI0725 11:31:51.747834    3383 log.go:172] (0xc00003a420) Data frame received for 3\nI0725 11:31:51.747856    3383 log.go:172] (0xc00056eb40) (3) Data frame handling\nI0725 11:31:51.749387    3383 log.go:172] (0xc00003a420) Data frame received for 1\nI0725 11:31:51.749401    3383 log.go:172] (0xc000a00000) (1) Data frame handling\nI0725 11:31:51.749413    3383 log.go:172] (0xc000a00000) (1) Data frame sent\nI0725 11:31:51.749429    3383 log.go:172] (0xc00003a420) (0xc000a00000) Stream removed, broadcasting: 1\nI0725 11:31:51.749441    3383 log.go:172] (0xc00003a420) Go away received\nI0725 11:31:51.749694    3383 log.go:172] (0xc00003a420) (0xc000a00000) Stream removed, broadcasting: 1\nI0725 11:31:51.749709    3383 log.go:172] (0xc00003a420) (0xc00056eb40) Stream removed, broadcasting: 3\nI0725 11:31:51.749722    3383 log.go:172] (0xc00003a420) (0xc000a000a0) Stream removed, broadcasting: 5\n"
Jul 25 11:31:51.754: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8458.svc.cluster.local\tcanonical name = externalsvc.services-8458.svc.cluster.local.\nName:\texternalsvc.services-8458.svc.cluster.local\nAddress: 10.111.39.23\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8458, will wait for the garbage collector to delete the pods
Jul 25 11:31:51.813: INFO: Deleting ReplicationController externalsvc took: 6.574185ms
Jul 25 11:31:52.213: INFO: Terminating ReplicationController externalsvc pods took: 400.268275ms
Jul 25 11:32:03.587: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:32:03.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8458" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:22.382 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":205,"skipped":3562,"failed":0}
SSSSSSS
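
The type flip is done through the client library in the test; roughly the same change can be made with a merge patch that sets the external name and clears the allocated cluster IP. A sketch follows, assuming a long-lived exec pod named execpod (this run used a generated name); the patch details may need adjusting on other versions:

kubectl -n services-8458 patch service clusterip-service --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-8458.svc.cluster.local","clusterIP":null}}'
# DNS for the service name now returns a CNAME instead of a cluster IP:
kubectl -n services-8458 exec execpod -- nslookup clusterip-service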
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:32:03.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:32:39.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9321" for this suite.

• [SLOW TEST:35.455 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3569,"failed":0}
SSSSSSSS
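
The three containers above (terminate-cmd-rpa, -rpof, -rpn) appear to correspond to restartPolicy Always, OnFailure and Never; each runs a command that exits, and the test asserts on the resulting phase, restart count, readiness and state. A cut-down sketch of the Never case with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
sleep 10
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
# expected: Failed 0 (with OnFailure or Always the restart count keeps increasing instead)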
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:32:39.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:32:39.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be" in namespace "projected-9114" to be "Succeeded or Failed"
Jul 25 11:32:39.192: INFO: Pod "downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be": Phase="Pending", Reason="", readiness=false. Elapsed: 39.220588ms
Jul 25 11:32:41.196: INFO: Pod "downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043181314s
Jul 25 11:32:43.201: INFO: Pod "downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04820565s
STEP: Saw pod success
Jul 25 11:32:43.201: INFO: Pod "downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be" satisfied condition "Succeeded or Failed"
Jul 25 11:32:43.205: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be container client-container: 
STEP: delete the pod
Jul 25 11:32:43.266: INFO: Waiting for pod downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be to disappear
Jul 25 11:32:43.269: INFO: Pod downwardapi-volume-ed136cde-3309-4db6-bc40-55e28a5641be no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:32:43.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9114" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3577,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:32:43.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 25 11:32:47.934: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0bfeb810-9707-4bae-a219-45d5bc80df72"
Jul 25 11:32:47.934: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0bfeb810-9707-4bae-a219-45d5bc80df72" in namespace "pods-6792" to be "terminated due to deadline exceeded"
Jul 25 11:32:47.959: INFO: Pod "pod-update-activedeadlineseconds-0bfeb810-9707-4bae-a219-45d5bc80df72": Phase="Running", Reason="", readiness=true. Elapsed: 25.125971ms
Jul 25 11:32:49.964: INFO: Pod "pod-update-activedeadlineseconds-0bfeb810-9707-4bae-a219-45d5bc80df72": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.029568379s
Jul 25 11:32:49.964: INFO: Pod "pod-update-activedeadlineseconds-0bfeb810-9707-4bae-a219-45d5bc80df72" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:32:49.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6792" for this suite.

• [SLOW TEST:6.696 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
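
activeDeadlineSeconds is one of the few pod-spec fields that may be updated on a running pod (it can be set or lowered, never raised); shrinking it makes the kubelet kill the pod with reason DeadlineExceeded, which is exactly what the test waits for. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 600
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'   # Failed/DeadlineExceeded after about 5s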
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:32:49.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:32:50.425: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:32:52.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273570, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273570, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273570, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273570, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:32:55.488: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:32:55.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2305" for this suite.
STEP: Destroying namespace "webhook-2305-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.789 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":209,"skipped":3613,"failed":0}
SSSSS
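
The patching step above just rewrites the operations list in the webhook's rules: with CREATE removed, configmap creation bypasses the webhook; once CREATE is patched back in, it is intercepted again. A rough equivalent using a JSON patch, with a placeholder configuration name (the test uses a generated one and its own match rules):

kubectl patch validatingwebhookconfiguration e2e-test-webhook-config --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
kubectl create configmap not-intercepted-on-create    # CREATE no longer matches the webhook rules
kubectl patch validatingwebhookconfiguration e2e-test-webhook-config --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'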
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:32:55.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:32:55.868: INFO: Create a RollingUpdate DaemonSet
Jul 25 11:32:55.871: INFO: Check that daemon pods launch on every node of the cluster
Jul 25 11:32:55.911: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:32:55.929: INFO: Number of nodes with available pods: 0
Jul 25 11:32:55.929: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:32:56.935: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:32:56.939: INFO: Number of nodes with available pods: 0
Jul 25 11:32:56.939: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:32:57.954: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:32:57.957: INFO: Number of nodes with available pods: 0
Jul 25 11:32:57.957: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:32:58.978: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:32:58.982: INFO: Number of nodes with available pods: 0
Jul 25 11:32:58.982: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:32:59.934: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:32:59.938: INFO: Number of nodes with available pods: 1
Jul 25 11:32:59.938: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:33:00.933: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:33:00.936: INFO: Number of nodes with available pods: 2
Jul 25 11:33:00.936: INFO: Number of running nodes: 2, number of available pods: 2
Jul 25 11:33:00.936: INFO: Update the DaemonSet to trigger a rollout
Jul 25 11:33:00.946: INFO: Updating DaemonSet daemon-set
Jul 25 11:33:13.965: INFO: Roll back the DaemonSet before rollout is complete
Jul 25 11:33:13.971: INFO: Updating DaemonSet daemon-set
Jul 25 11:33:13.971: INFO: Make sure DaemonSet rollback is complete
Jul 25 11:33:14.038: INFO: Wrong image for pod: daemon-set-vkc8d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 25 11:33:14.038: INFO: Pod daemon-set-vkc8d is not available
Jul 25 11:33:14.110: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:33:15.115: INFO: Wrong image for pod: daemon-set-vkc8d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 25 11:33:15.115: INFO: Pod daemon-set-vkc8d is not available
Jul 25 11:33:15.119: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:33:16.115: INFO: Wrong image for pod: daemon-set-vkc8d. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 25 11:33:16.115: INFO: Pod daemon-set-vkc8d is not available
Jul 25 11:33:16.121: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:33:17.115: INFO: Pod daemon-set-gpxg8 is not available
Jul 25 11:33:17.119: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5268, will wait for the garbage collector to delete the pods
Jul 25 11:33:17.189: INFO: Deleting DaemonSet.extensions daemon-set took: 12.999815ms
Jul 25 11:33:17.490: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.245359ms
Jul 25 11:33:23.494: INFO: Number of nodes with available pods: 0
Jul 25 11:33:23.494: INFO: Number of running nodes: 0, number of available pods: 0
Jul 25 11:33:23.496: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5268/daemonsets","resourceVersion":"4037821"},"items":null}

Jul 25 11:33:23.499: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5268/pods","resourceVersion":"4037821"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:33:23.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5268" for this suite.

• [SLOW TEST:27.755 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":210,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
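
The rollback sequence above has a direct kubectl counterpart: push an update with an image that can never pull, watch the rollout stall on the first replaced pod, then undo before it spreads. The container name app is a placeholder; the DaemonSet name and images match the run:

kubectl set image daemonset/daemon-set app=foo:non-existent
kubectl rollout status daemonset/daemon-set --timeout=30s   # stalls: the new pod never becomes available
kubectl rollout undo daemonset/daemon-set
kubectl rollout status daemonset/daemon-set                 # completes again with httpd:2.4.38-alpine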
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:33:23.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:33:23.606: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:33:25.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Pending, waiting for it to be Running (with Ready = true)
Jul 25 11:33:27.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:29.609: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:31.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:33.630: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:35.630: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:37.609: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:39.613: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:41.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:43.611: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:45.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = false)
Jul 25 11:33:47.610: INFO: The status of Pod test-webserver-07936c28-7766-470c-858a-c1485c7a015a is Running (Ready = true)
Jul 25 11:33:47.614: INFO: Container started at 2020-07-25 11:33:25 +0000 UTC, pod became ready at 2020-07-25 11:33:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:33:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3688" for this suite.

• [SLOW TEST:24.103 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
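
The probe behaviour above (Running but Ready=false for roughly the initial delay, and no restarts) comes from a readiness probe with a deliberately long initialDelaySeconds and no failing liveness probe. A sketch with an illustrative image and delay:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.19-alpine
    readinessProbe:
      httpGet: {path: /, port: 80}
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
kubectl get pod readiness-demo -w   # READY stays 0/1 while Running until the delay elapses, then 1/1; RESTARTS stays 0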
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:33:47.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-5280/secret-test-04a2dd5a-72b4-4c72-8dde-aa0698f3df29
STEP: Creating a pod to test consume secrets
Jul 25 11:33:47.710: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa" in namespace "secrets-5280" to be "Succeeded or Failed"
Jul 25 11:33:47.738: INFO: Pod "pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa": Phase="Pending", Reason="", readiness=false. Elapsed: 27.610587ms
Jul 25 11:33:49.741: INFO: Pod "pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030799826s
Jul 25 11:33:51.745: INFO: Pod "pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034503311s
STEP: Saw pod success
Jul 25 11:33:51.745: INFO: Pod "pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa" satisfied condition "Succeeded or Failed"
Jul 25 11:33:51.748: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa container env-test: 
STEP: delete the pod
Jul 25 11:33:51.787: INFO: Waiting for pod pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa to disappear
Jul 25 11:33:51.803: INFO: Pod pod-configmaps-3d15e7b7-f365-4258-80da-ee3233457faa no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:33:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5280" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3667,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
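
This case looks like the envFrom variant of secret consumption: every key of the referenced secret becomes an environment variable, rather than mapping a single key as in the earlier env-var test. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-envfrom-demo
stringData:
  SECRET_DATA: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-envfrom-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    envFrom:
    - secretRef:
        name: secret-envfrom-demo
EOF
kubectl logs secret-envfrom-pod | grep SECRET_DATA    # expect SECRET_DATA=value-1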
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:33:51.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6ac14fcb-0fe7-42f3-8340-36eff40ecb21
STEP: Creating a pod to test consume secrets
Jul 25 11:33:51.944: INFO: Waiting up to 5m0s for pod "pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56" in namespace "secrets-2464" to be "Succeeded or Failed"
Jul 25 11:33:51.961: INFO: Pod "pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56": Phase="Pending", Reason="", readiness=false. Elapsed: 17.120328ms
Jul 25 11:33:53.989: INFO: Pod "pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045703282s
Jul 25 11:33:55.994: INFO: Pod "pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05029846s
STEP: Saw pod success
Jul 25 11:33:55.994: INFO: Pod "pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56" satisfied condition "Succeeded or Failed"
Jul 25 11:33:55.997: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56 container secret-volume-test: 
STEP: delete the pod
Jul 25 11:33:56.039: INFO: Waiting for pod pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56 to disappear
Jul 25 11:33:56.050: INFO: Pod pod-secrets-72b4805e-8d4f-4781-90b8-272e670a2f56 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:33:56.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2464" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:33:56.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-00ce88de-2e49-4d42-9965-759574ce3f7a
STEP: Creating configMap with name cm-test-opt-upd-aabd2d67-90c0-4ede-9a19-d1ae2e287fa5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-00ce88de-2e49-4d42-9965-759574ce3f7a
STEP: Updating configmap cm-test-opt-upd-aabd2d67-90c0-4ede-9a19-d1ae2e287fa5
STEP: Creating configMap with name cm-test-opt-create-f7e91107-8504-4751-a49c-f57e9d6445d4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:34:04.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9547" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3762,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:34:04.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:34:04.530: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 25 11:34:04.582: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 25 11:34:09.601: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 25 11:34:09.601: INFO: Creating deployment "test-rolling-update-deployment"
Jul 25 11:34:09.613: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 25 11:34:09.621: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 25 11:34:11.628: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul 25 11:34:11.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:34:13.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273653, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273649, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:34:15.635: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 25 11:34:15.643: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-180 /apis/apps/v1/namespaces/deployment-180/deployments/test-rolling-update-deployment 8b79f29a-b074-41ab-b71b-18b444cc2cca 4038162 1 2020-07-25 11:34:09 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-07-25 11:34:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 11:34:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020f3eb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-25 11:34:09 +0000 UTC,LastTransitionTime:2020-07-25 11:34:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-07-25 11:34:13 +0000 UTC,LastTransitionTime:2020-07-25 11:34:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 25 11:34:15.646: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-180 /apis/apps/v1/namespaces/deployment-180/replicasets/test-rolling-update-deployment-59d5cb45c7 a8d7e3e7-fa2f-46f9-a748-55de8af4a09d 4038151 1 2020-07-25 11:34:09 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8b79f29a-b074-41ab-b71b-18b444cc2cca 0xc00282a737 0xc00282a738}] []  [{kube-controller-manager Update apps/v1 2020-07-25 11:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 55 57 102 50 57 97 45 98 48 55 52 45 52 49 97 98 45 98 55 49 98 45 49 56 98 52 52 52 99 99 50 99 99 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00282a7c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 25 11:34:15.646: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 25 11:34:15.646: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-180 /apis/apps/v1/namespaces/deployment-180/replicasets/test-rolling-update-controller ea7c09a8-6f25-486e-ab92-00544843c788 4038161 2 2020-07-25 11:34:04 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 8b79f29a-b074-41ab-b71b-18b444cc2cca 0xc00282a627 0xc00282a628}] []  [{e2e.test Update apps/v1 2020-07-25 11:34:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 11:34:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 55 57 102 50 57 97 45 98 48 55 52 45 52 49 97 98 45 98 55 49 98 45 49 56 98 52 52 52 99 99 50 99 99 97 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00282a6c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 11:34:15.649: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-8kp5s" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-8kp5s test-rolling-update-deployment-59d5cb45c7- deployment-180 /api/v1/namespaces/deployment-180/pods/test-rolling-update-deployment-59d5cb45c7-8kp5s 87c5ee65-aa62-4d3b-9d64-d17c364aa50f 4038150 0 2020-07-25 11:34:09 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 a8d7e3e7-fa2f-46f9-a748-55de8af4a09d 0xc00282b137 0xc00282b138}] []  [{kube-controller-manager Update v1 2020-07-25 11:34:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 100 55 101 51 101 55 45 102 97 50 102 45 52 54 102 57 45 97 55 52 56 45 53 53 100 101 56 97 102 52 97 48 57 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 11:34:13 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 
100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fjbn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fjbn6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fjbn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/un
reachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:34:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.157,StartTime:2020-07-25 11:34:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-25 11:34:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://261e2b964da776f040d1dc10f7e092d3d9e43213edcd2d91485f225c0e16e99f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:34:15.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-180" for this suite.

• [SLOW TEST:11.175 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":215,"skipped":3769,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:34:15.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-80daedb3-5bb0-423d-8b01-d17aa0aab343
STEP: Creating configMap with name cm-test-opt-upd-c869f1cc-c322-4acf-9c2b-d28a2993420b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-80daedb3-5bb0-423d-8b01-d17aa0aab343
STEP: Updating configmap cm-test-opt-upd-c869f1cc-c322-4acf-9c2b-d28a2993420b
STEP: Creating configMap with name cm-test-opt-create-98228224-a6f0-4f0f-8d48-331e85260a43
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:35:50.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-75" for this suite.

• [SLOW TEST:94.651 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3806,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:35:50.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:35:50.365: INFO: Waiting up to 5m0s for pod "busybox-user-65534-46bdf1af-b8ff-4e36-8722-2f5c1e1a30df" in namespace "security-context-test-536" to be "Succeeded or Failed"
Jul 25 11:35:50.385: INFO: Pod "busybox-user-65534-46bdf1af-b8ff-4e36-8722-2f5c1e1a30df": Phase="Pending", Reason="", readiness=false. Elapsed: 19.728361ms
Jul 25 11:35:52.389: INFO: Pod "busybox-user-65534-46bdf1af-b8ff-4e36-8722-2f5c1e1a30df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023708068s
Jul 25 11:35:54.393: INFO: Pod "busybox-user-65534-46bdf1af-b8ff-4e36-8722-2f5c1e1a30df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027465692s
Jul 25 11:35:54.393: INFO: Pod "busybox-user-65534-46bdf1af-b8ff-4e36-8722-2f5c1e1a30df" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:35:54.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-536" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3814,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:35:54.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 25 11:35:54.450: INFO: >>> kubeConfig: /root/.kube/config
Jul 25 11:35:56.394: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:06.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7050" for this suite.

• [SLOW TEST:11.733 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":218,"skipped":3830,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:06.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:36:06.191: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:12.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4457" for this suite.

• [SLOW TEST:6.349 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":219,"skipped":3848,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:12.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-881a6b73-b03c-4c0a-a349-4a5cf4b622dd
STEP: Creating secret with name s-test-opt-upd-cd3478a9-ff4f-4cec-93ce-e87b2149b6b1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-881a6b73-b03c-4c0a-a349-4a5cf4b622dd
STEP: Updating secret s-test-opt-upd-cd3478a9-ff4f-4cec-93ce-e87b2149b6b1
STEP: Creating secret with name s-test-opt-create-a72cf569-4d6a-4bfd-8f0f-c2fec011f4d3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:20.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9774" for this suite.

• [SLOW TEST:8.426 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3849,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:20.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:36:20.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1" in namespace "downward-api-7149" to be "Succeeded or Failed"
Jul 25 11:36:21.001: INFO: Pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.819689ms
Jul 25 11:36:23.134: INFO: Pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136629846s
Jul 25 11:36:25.138: INFO: Pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141013279s
Jul 25 11:36:27.142: INFO: Pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145089729s
STEP: Saw pod success
Jul 25 11:36:27.142: INFO: Pod "downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1" satisfied condition "Succeeded or Failed"
Jul 25 11:36:27.145: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1 container client-container: 
STEP: delete the pod
Jul 25 11:36:27.351: INFO: Waiting for pod downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1 to disappear
Jul 25 11:36:27.505: INFO: Pod downwardapi-volume-f844d8d3-c536-4ac4-bf66-2292eb99c8c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:27.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7149" for this suite.

• [SLOW TEST:6.634 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3863,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:27.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-ba52b503-af01-4904-b0b1-6fea5edc3767
STEP: Creating a pod to test consume configMaps
Jul 25 11:36:27.957: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2" in namespace "projected-9546" to be "Succeeded or Failed"
Jul 25 11:36:27.966: INFO: Pod "pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903669ms
Jul 25 11:36:29.970: INFO: Pod "pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012779803s
Jul 25 11:36:31.974: INFO: Pod "pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016857607s
STEP: Saw pod success
Jul 25 11:36:31.974: INFO: Pod "pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2" satisfied condition "Succeeded or Failed"
Jul 25 11:36:31.990: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 25 11:36:32.031: INFO: Waiting for pod pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2 to disappear
Jul 25 11:36:32.049: INFO: Pod pod-projected-configmaps-1923f32f-3e32-4c5e-a266-887a4020b9f2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:32.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9546" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3875,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:32.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:36:32.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026" in namespace "downward-api-1449" to be "Succeeded or Failed"
Jul 25 11:36:32.377: INFO: Pod "downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227842ms
Jul 25 11:36:34.382: INFO: Pod "downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008983205s
Jul 25 11:36:36.386: INFO: Pod "downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013042866s
STEP: Saw pod success
Jul 25 11:36:36.386: INFO: Pod "downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026" satisfied condition "Succeeded or Failed"
Jul 25 11:36:36.389: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026 container client-container: 
STEP: delete the pod
Jul 25 11:36:36.434: INFO: Waiting for pod downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026 to disappear
Jul 25 11:36:36.470: INFO: Pod downwardapi-volume-a499bc23-8e83-498e-898b-91017ef51026 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1449" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3876,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:36.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c44d23d8-a733-482c-b6dd-cc668091629f
STEP: Creating a pod to test consume secrets
Jul 25 11:36:36.555: INFO: Waiting up to 5m0s for pod "pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426" in namespace "secrets-5374" to be "Succeeded or Failed"
Jul 25 11:36:36.566: INFO: Pod "pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426": Phase="Pending", Reason="", readiness=false. Elapsed: 11.222003ms
Jul 25 11:36:38.570: INFO: Pod "pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014624138s
Jul 25 11:36:40.574: INFO: Pod "pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018811411s
STEP: Saw pod success
Jul 25 11:36:40.574: INFO: Pod "pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426" satisfied condition "Succeeded or Failed"
Jul 25 11:36:40.577: INFO: Trying to get logs from node kali-worker pod pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426 container secret-volume-test: 
STEP: delete the pod
Jul 25 11:36:40.639: INFO: Waiting for pod pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426 to disappear
Jul 25 11:36:40.643: INFO: Pod pod-secrets-934f2707-b6e3-4d53-8deb-f43cc868a426 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:40.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5374" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3886,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:40.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Jul 25 11:36:40.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Jul 25 11:36:40.806: INFO: stderr: ""
Jul 25 11:36:40.806: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:40.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2054" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":225,"skipped":3888,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:40.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:36:40.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e" in namespace "projected-2979" to be "Succeeded or Failed"
Jul 25 11:36:40.900: INFO: Pod "downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.420136ms
Jul 25 11:36:42.905: INFO: Pod "downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013849988s
Jul 25 11:36:44.910: INFO: Pod "downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018825911s
STEP: Saw pod success
Jul 25 11:36:44.910: INFO: Pod "downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e" satisfied condition "Succeeded or Failed"
Jul 25 11:36:44.913: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e container client-container: 
STEP: delete the pod
Jul 25 11:36:45.081: INFO: Waiting for pod downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e to disappear
Jul 25 11:36:45.302: INFO: Pod downwardapi-volume-a1015770-4c24-43ef-852b-df22ed8aab0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:45.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2979" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3918,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:45.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:36:45.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d" in namespace "projected-1049" to be "Succeeded or Failed"
Jul 25 11:36:45.440: INFO: Pod "downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.160584ms
Jul 25 11:36:47.848: INFO: Pod "downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.445927053s
Jul 25 11:36:49.852: INFO: Pod "downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.450332657s
STEP: Saw pod success
Jul 25 11:36:49.852: INFO: Pod "downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d" satisfied condition "Succeeded or Failed"
Jul 25 11:36:49.855: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d container client-container: 
STEP: delete the pod
Jul 25 11:36:49.937: INFO: Waiting for pod downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d to disappear
Jul 25 11:36:49.940: INFO: Pod downwardapi-volume-0dfebf28-9847-4e2f-98f5-7a442b34399d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:36:49.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1049" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3918,"failed":0}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:36:49.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-t25f
STEP: Creating a pod to test atomic-volume-subpath
Jul 25 11:36:50.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t25f" in namespace "subpath-2685" to be "Succeeded or Failed"
Jul 25 11:36:50.086: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716182ms
Jul 25 11:36:52.206: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128726728s
Jul 25 11:36:54.211: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 4.13370176s
Jul 25 11:36:56.219: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 6.140946778s
Jul 25 11:36:58.223: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 8.144853468s
Jul 25 11:37:00.227: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 10.148916269s
Jul 25 11:37:02.236: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 12.158416006s
Jul 25 11:37:04.241: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 14.162887604s
Jul 25 11:37:06.261: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 16.182874092s
Jul 25 11:37:08.284: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 18.206487113s
Jul 25 11:37:10.288: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 20.210367724s
Jul 25 11:37:12.292: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Running", Reason="", readiness=true. Elapsed: 22.214068036s
Jul 25 11:37:14.295: INFO: Pod "pod-subpath-test-secret-t25f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.217011631s
STEP: Saw pod success
Jul 25 11:37:14.295: INFO: Pod "pod-subpath-test-secret-t25f" satisfied condition "Succeeded or Failed"
Jul 25 11:37:14.297: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-t25f container test-container-subpath-secret-t25f: 
STEP: delete the pod
Jul 25 11:37:14.352: INFO: Waiting for pod pod-subpath-test-secret-t25f to disappear
Jul 25 11:37:14.359: INFO: Pod pod-subpath-test-secret-t25f no longer exists
STEP: Deleting pod pod-subpath-test-secret-t25f
Jul 25 11:37:14.359: INFO: Deleting pod "pod-subpath-test-secret-t25f" in namespace "subpath-2685"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:37:14.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2685" for this suite.

• [SLOW TEST:24.421 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":228,"skipped":3918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:37:14.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 25 11:37:14.445: INFO: Waiting up to 5m0s for pod "pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8" in namespace "emptydir-6175" to be "Succeeded or Failed"
Jul 25 11:37:14.476: INFO: Pod "pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.869555ms
Jul 25 11:37:16.540: INFO: Pod "pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09507808s
Jul 25 11:37:18.543: INFO: Pod "pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098745733s
STEP: Saw pod success
Jul 25 11:37:18.544: INFO: Pod "pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8" satisfied condition "Succeeded or Failed"
Jul 25 11:37:18.546: INFO: Trying to get logs from node kali-worker pod pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8 container test-container: 
STEP: delete the pod
Jul 25 11:37:18.639: INFO: Waiting for pod pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8 to disappear
Jul 25 11:37:18.650: INFO: Pod pod-ab37651d-b0d1-4fc6-a72d-e0449f51f3a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:37:18.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6175" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3953,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:37:18.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 25 11:37:18.785: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:18.806: INFO: Number of nodes with available pods: 0
Jul 25 11:37:18.806: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:19.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:19.814: INFO: Number of nodes with available pods: 0
Jul 25 11:37:19.814: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:20.867: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:20.897: INFO: Number of nodes with available pods: 0
Jul 25 11:37:20.897: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:21.812: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:21.815: INFO: Number of nodes with available pods: 0
Jul 25 11:37:21.815: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:22.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:22.815: INFO: Number of nodes with available pods: 0
Jul 25 11:37:22.815: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:23.828: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:23.832: INFO: Number of nodes with available pods: 2
Jul 25 11:37:23.832: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 25 11:37:23.851: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:23.854: INFO: Number of nodes with available pods: 1
Jul 25 11:37:23.854: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:24.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:24.863: INFO: Number of nodes with available pods: 1
Jul 25 11:37:24.863: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:25.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:25.861: INFO: Number of nodes with available pods: 1
Jul 25 11:37:25.861: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:26.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:26.863: INFO: Number of nodes with available pods: 1
Jul 25 11:37:26.863: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:27.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:27.862: INFO: Number of nodes with available pods: 1
Jul 25 11:37:27.862: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:28.860: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:28.864: INFO: Number of nodes with available pods: 1
Jul 25 11:37:28.864: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:29.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:29.863: INFO: Number of nodes with available pods: 1
Jul 25 11:37:29.863: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:30.866: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:30.869: INFO: Number of nodes with available pods: 1
Jul 25 11:37:30.869: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:37:31.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:37:31.862: INFO: Number of nodes with available pods: 2
Jul 25 11:37:31.862: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3267, will wait for the garbage collector to delete the pods
Jul 25 11:37:31.927: INFO: Deleting DaemonSet.extensions daemon-set took: 8.643075ms
Jul 25 11:37:32.227: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.322651ms
Jul 25 11:37:43.430: INFO: Number of nodes with available pods: 0
Jul 25 11:37:43.430: INFO: Number of running nodes: 0, number of available pods: 0
Jul 25 11:37:43.432: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3267/daemonsets","resourceVersion":"4039254"},"items":null}

Jul 25 11:37:43.434: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3267/pods","resourceVersion":"4039254"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:37:43.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3267" for this suite.

• [SLOW TEST:24.827 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":230,"skipped":3959,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:37:43.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul 25 11:37:43.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul 25 11:37:54.165: INFO: >>> kubeConfig: /root/.kube/config
Jul 25 11:37:57.104: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:38:07.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5733" for this suite.

• [SLOW TEST:24.297 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":231,"skipped":3978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:38:07.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Jul 25 11:38:07.839: INFO: Waiting up to 5m0s for pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c" in namespace "containers-9227" to be "Succeeded or Failed"
Jul 25 11:38:07.843: INFO: Pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374702ms
Jul 25 11:38:09.880: INFO: Pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0407873s
Jul 25 11:38:11.914: INFO: Pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.075002068s
Jul 25 11:38:13.918: INFO: Pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078826138s
STEP: Saw pod success
Jul 25 11:38:13.918: INFO: Pod "client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c" satisfied condition "Succeeded or Failed"
Jul 25 11:38:13.921: INFO: Trying to get logs from node kali-worker2 pod client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c container test-container: 
STEP: delete the pod
Jul 25 11:38:13.967: INFO: Waiting for pod client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c to disappear
Jul 25 11:38:13.978: INFO: Pod client-containers-a0b3443e-3893-427b-bad3-286cbb665a7c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:38:13.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9227" for this suite.

• [SLOW TEST:6.203 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:38:13.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 25 11:38:14.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 25 11:38:14.202: INFO: Waiting for terminating namespaces to be deleted...
Jul 25 11:38:14.204: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 25 11:38:14.210: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 25 11:38:14.210: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 11:38:14.210: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 25 11:38:14.210: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 25 11:38:14.210: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 25 11:38:14.215: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:38:14.215: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 25 11:38:14.215: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 25 11:38:14.215: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1624fbdcea26dab8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:38:15.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9416" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":233,"skipped":4054,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:38:15.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 25 11:38:15.433: INFO: Waiting up to 5m0s for pod "pod-1c21e8e9-0536-4c83-993e-b4977a0b442d" in namespace "emptydir-9735" to be "Succeeded or Failed"
Jul 25 11:38:15.445: INFO: Pod "pod-1c21e8e9-0536-4c83-993e-b4977a0b442d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.329021ms
Jul 25 11:38:17.450: INFO: Pod "pod-1c21e8e9-0536-4c83-993e-b4977a0b442d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016479554s
Jul 25 11:38:19.477: INFO: Pod "pod-1c21e8e9-0536-4c83-993e-b4977a0b442d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043421936s
STEP: Saw pod success
Jul 25 11:38:19.477: INFO: Pod "pod-1c21e8e9-0536-4c83-993e-b4977a0b442d" satisfied condition "Succeeded or Failed"
Jul 25 11:38:19.480: INFO: Trying to get logs from node kali-worker pod pod-1c21e8e9-0536-4c83-993e-b4977a0b442d container test-container: 
STEP: delete the pod
Jul 25 11:38:19.658: INFO: Waiting for pod pod-1c21e8e9-0536-4c83-993e-b4977a0b442d to disappear
Jul 25 11:38:19.690: INFO: Pod pod-1c21e8e9-0536-4c83-993e-b4977a0b442d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:38:19.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9735" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4063,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:38:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2511
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-2511
Jul 25 11:38:19.820: INFO: Found 0 stateful pods, waiting for 1
Jul 25 11:38:29.824: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 25 11:38:29.862: INFO: Deleting all statefulset in ns statefulset-2511
Jul 25 11:38:29.877: INFO: Scaling statefulset ss to 0
Jul 25 11:38:49.986: INFO: Waiting for statefulset status.replicas updated to 0
Jul 25 11:38:49.988: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:38:50.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2511" for this suite.

• [SLOW TEST:30.319 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":235,"skipped":4093,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:38:50.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:39:06.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8165" for this suite.

• [SLOW TEST:16.423 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":236,"skipped":4101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:39:06.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0725 11:39:07.785114       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 11:39:07.785: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:39:07.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1896" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":237,"skipped":4155,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:39:07.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:39:08.268: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 25 11:39:08.339: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:08.387: INFO: Number of nodes with available pods: 0
Jul 25 11:39:08.387: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:09.592: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:09.598: INFO: Number of nodes with available pods: 0
Jul 25 11:39:09.598: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:10.391: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:10.394: INFO: Number of nodes with available pods: 0
Jul 25 11:39:10.394: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:11.406: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:11.409: INFO: Number of nodes with available pods: 0
Jul 25 11:39:11.410: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:12.425: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:12.466: INFO: Number of nodes with available pods: 0
Jul 25 11:39:12.466: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:13.404: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:13.465: INFO: Number of nodes with available pods: 0
Jul 25 11:39:13.465: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:14.641: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:14.644: INFO: Number of nodes with available pods: 1
Jul 25 11:39:14.644: INFO: Node kali-worker2 is running more than one daemon pod
Jul 25 11:39:15.391: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:15.394: INFO: Number of nodes with available pods: 2
Jul 25 11:39:15.394: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 25 11:39:15.469: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:15.469: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:15.562: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:16.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:16.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:16.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:17.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:17.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:17.570: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:18.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:18.566: INFO: Pod daemon-set-8hcc2 is not available
Jul 25 11:39:18.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:18.570: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:19.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:19.566: INFO: Pod daemon-set-8hcc2 is not available
Jul 25 11:39:19.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:19.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:20.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:20.566: INFO: Pod daemon-set-8hcc2 is not available
Jul 25 11:39:20.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:20.571: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:21.565: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:21.565: INFO: Pod daemon-set-8hcc2 is not available
Jul 25 11:39:21.565: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:21.568: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:22.566: INFO: Wrong image for pod: daemon-set-8hcc2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:22.566: INFO: Pod daemon-set-8hcc2 is not available
Jul 25 11:39:22.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:22.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:23.565: INFO: Pod daemon-set-7g588 is not available
Jul 25 11:39:23.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:23.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:24.566: INFO: Pod daemon-set-7g588 is not available
Jul 25 11:39:24.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:24.570: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:25.566: INFO: Pod daemon-set-7g588 is not available
Jul 25 11:39:25.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:25.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:26.574: INFO: Pod daemon-set-7g588 is not available
Jul 25 11:39:26.574: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:26.577: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:27.873: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:27.876: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:28.567: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:28.569: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:29.579: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:29.579: INFO: Pod daemon-set-gplzd is not available
Jul 25 11:39:29.584: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:30.566: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:30.566: INFO: Pod daemon-set-gplzd is not available
Jul 25 11:39:30.571: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:31.591: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:31.591: INFO: Pod daemon-set-gplzd is not available
Jul 25 11:39:31.595: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:32.567: INFO: Wrong image for pod: daemon-set-gplzd. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 25 11:39:32.567: INFO: Pod daemon-set-gplzd is not available
Jul 25 11:39:32.572: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:33.566: INFO: Pod daemon-set-hjds5 is not available
Jul 25 11:39:33.571: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 25 11:39:33.597: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:33.633: INFO: Number of nodes with available pods: 1
Jul 25 11:39:33.633: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:34.637: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:34.640: INFO: Number of nodes with available pods: 1
Jul 25 11:39:34.640: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:35.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:35.645: INFO: Number of nodes with available pods: 1
Jul 25 11:39:35.645: INFO: Node kali-worker is running more than one daemon pod
Jul 25 11:39:36.694: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 25 11:39:36.698: INFO: Number of nodes with available pods: 2
Jul 25 11:39:36.698: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7599, will wait for the garbage collector to delete the pods
Jul 25 11:39:36.772: INFO: Deleting DaemonSet.extensions daemon-set took: 6.80004ms
Jul 25 11:39:37.073: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.277265ms
Jul 25 11:39:43.759: INFO: Number of nodes with available pods: 0
Jul 25 11:39:43.759: INFO: Number of running nodes: 0, number of available pods: 0
Jul 25 11:39:43.762: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7599/daemonsets","resourceVersion":"4039991"},"items":null}

Jul 25 11:39:43.765: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7599/pods","resourceVersion":"4039991"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:39:43.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7599" for this suite.

• [SLOW TEST:35.992 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":238,"skipped":4168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:39:43.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 25 11:39:44.007: INFO: Waiting up to 5m0s for pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331" in namespace "emptydir-4881" to be "Succeeded or Failed"
Jul 25 11:39:44.015: INFO: Pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331": Phase="Pending", Reason="", readiness=false. Elapsed: 7.614352ms
Jul 25 11:39:46.095: INFO: Pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087651953s
Jul 25 11:39:48.099: INFO: Pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331": Phase="Running", Reason="", readiness=true. Elapsed: 4.092048943s
Jul 25 11:39:50.103: INFO: Pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096535852s
STEP: Saw pod success
Jul 25 11:39:50.104: INFO: Pod "pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331" satisfied condition "Succeeded or Failed"
Jul 25 11:39:50.107: INFO: Trying to get logs from node kali-worker pod pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331 container test-container: 
STEP: delete the pod
Jul 25 11:39:50.167: INFO: Waiting for pod pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331 to disappear
Jul 25 11:39:50.191: INFO: Pod pod-a6055fb4-65e3-4ac0-ad6a-d129c73c7331 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:39:50.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4881" for this suite.

• [SLOW TEST:6.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:39:50.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-a280d07c-ab70-40de-a25c-674d61572029
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a280d07c-ab70-40de-a25c-674d61572029
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:39:56.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8797" for this suite.

• [SLOW TEST:6.286 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4252,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:39:56.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:39:58.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:40:00.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:40:02.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731273998, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:40:05.682: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul 25 11:40:05.705: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:05.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8213" for this suite.
STEP: Destroying namespace "webhook-8213-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.375 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":241,"skipped":4258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:05.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:40:07.173: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:40:09.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:40:11.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274007, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:40:14.215: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:14.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-936" for this suite.
STEP: Destroying namespace "webhook-936-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.455 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":242,"skipped":4280,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:14.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jul 25 11:40:14.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7327'
Jul 25 11:40:18.321: INFO: stderr: ""
Jul 25 11:40:18.321: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 25 11:40:19.401: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:19.401: INFO: Found 0 / 1
Jul 25 11:40:20.325: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:20.325: INFO: Found 0 / 1
Jul 25 11:40:21.325: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:21.325: INFO: Found 0 / 1
Jul 25 11:40:22.325: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:22.325: INFO: Found 1 / 1
Jul 25 11:40:22.325: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 25 11:40:22.328: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:22.328: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Jul 25 11:40:22.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-l9dnd --namespace=kubectl-7327 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 25 11:40:22.437: INFO: stderr: ""
Jul 25 11:40:22.437: INFO: stdout: "pod/agnhost-master-l9dnd patched\n"
STEP: checking annotations
Jul 25 11:40:22.459: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 25 11:40:22.459: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:22.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7327" for this suite.

• [SLOW TEST:8.141 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":243,"skipped":4288,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:22.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:40:22.557: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:23.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6937" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":244,"skipped":4295,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:23.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:40:23.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514" in namespace "downward-api-4678" to be "Succeeded or Failed"
Jul 25 11:40:23.715: INFO: Pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031728ms
Jul 25 11:40:25.819: INFO: Pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10989158s
Jul 25 11:40:27.843: INFO: Pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514": Phase="Running", Reason="", readiness=true. Elapsed: 4.133638841s
Jul 25 11:40:29.847: INFO: Pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138056483s
STEP: Saw pod success
Jul 25 11:40:29.847: INFO: Pod "downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514" satisfied condition "Succeeded or Failed"
Jul 25 11:40:29.850: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514 container client-container: 
STEP: delete the pod
Jul 25 11:40:29.900: INFO: Waiting for pod downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514 to disappear
Jul 25 11:40:29.909: INFO: Pod downwardapi-volume-3d642e97-1476-454e-adc3-8b4ca843e514 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:29.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4678" for this suite.

• [SLOW TEST:6.298 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4297,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:29.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:40:45.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2108" for this suite.

• [SLOW TEST:16.088 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":246,"skipped":4307,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:40:46.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-bh4c
STEP: Creating a pod to test atomic-volume-subpath
Jul 25 11:40:46.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bh4c" in namespace "subpath-2043" to be "Succeeded or Failed"
Jul 25 11:40:46.307: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Pending", Reason="", readiness=false. Elapsed: 65.82731ms
Jul 25 11:40:48.311: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069967179s
Jul 25 11:40:50.315: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.0738409s
Jul 25 11:40:52.319: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.077753301s
Jul 25 11:40:54.323: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.081957067s
Jul 25 11:40:56.340: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 10.098634812s
Jul 25 11:40:58.344: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 12.102799516s
Jul 25 11:41:00.348: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 14.106941542s
Jul 25 11:41:02.351: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 16.109892048s
Jul 25 11:41:04.356: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 18.114307726s
Jul 25 11:41:06.360: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 20.118566446s
Jul 25 11:41:08.364: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 22.122593285s
Jul 25 11:41:10.369: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Running", Reason="", readiness=true. Elapsed: 24.12702726s
Jul 25 11:41:12.372: INFO: Pod "pod-subpath-test-projected-bh4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.130689374s
STEP: Saw pod success
Jul 25 11:41:12.372: INFO: Pod "pod-subpath-test-projected-bh4c" satisfied condition "Succeeded or Failed"
Jul 25 11:41:12.375: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-bh4c container test-container-subpath-projected-bh4c: 
STEP: delete the pod
Jul 25 11:41:12.416: INFO: Waiting for pod pod-subpath-test-projected-bh4c to disappear
Jul 25 11:41:12.432: INFO: Pod pod-subpath-test-projected-bh4c no longer exists
STEP: Deleting pod pod-subpath-test-projected-bh4c
Jul 25 11:41:12.432: INFO: Deleting pod "pod-subpath-test-projected-bh4c" in namespace "subpath-2043"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:41:12.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2043" for this suite.

• [SLOW TEST:26.436 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":247,"skipped":4320,"failed":0}
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:41:12.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf
Jul 25 11:41:12.522: INFO: Pod name my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf: Found 0 pods out of 1
Jul 25 11:41:17.526: INFO: Pod name my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf: Found 1 pods out of 1
Jul 25 11:41:17.526: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf" are running
Jul 25 11:41:17.529: INFO: Pod "my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf-29989" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:41:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:41:15 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:41:15 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-25 11:41:12 +0000 UTC Reason: Message:}])
Jul 25 11:41:17.529: INFO: Trying to dial the pod
Jul 25 11:41:22.539: INFO: Controller my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf: Got expected result from replica 1 [my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf-29989]: "my-hostname-basic-7bde4769-99a9-4a69-a358-3e237e4bacdf-29989", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:41:22.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3565" for this suite.

• [SLOW TEST:10.105 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":248,"skipped":4320,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:41:22.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-b6f27262-0a25-4f20-8b65-bc27d4fb6ab3 in namespace container-probe-5260
Jul 25 11:41:26.683: INFO: Started pod liveness-b6f27262-0a25-4f20-8b65-bc27d4fb6ab3 in namespace container-probe-5260
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 11:41:26.686: INFO: Initial restart count of pod liveness-b6f27262-0a25-4f20-8b65-bc27d4fb6ab3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:45:28.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5260" for this suite.

• [SLOW TEST:246.288 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4325,"failed":0}
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:45:28.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:45:29.065: INFO: Creating deployment "test-recreate-deployment"
Jul 25 11:45:29.085: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 25 11:45:29.163: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul 25 11:45:31.268: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 25 11:45:31.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274329, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274329, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274329, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274329, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:45:33.275: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 25 11:45:33.283: INFO: Updating deployment test-recreate-deployment
Jul 25 11:45:33.283: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 25 11:45:33.966: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-620 /apis/apps/v1/namespaces/deployment-620/deployments/test-recreate-deployment fba4c994-5e8a-4adf-b08e-4f16772d6230 4041514 2 2020-07-25 11:45:29 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b22698  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-25 11:45:33 +0000 UTC,LastTransitionTime:2020-07-25 11:45:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-07-25 11:45:33 +0000 UTC,LastTransitionTime:2020-07-25 11:45:29 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul 25 11:45:33.969: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-620 /apis/apps/v1/namespaces/deployment-620/replicasets/test-recreate-deployment-d5667d9c7 39e286a8-7a83-4e24-8291-1b749e89985f 4041511 1 2020-07-25 11:45:33 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment fba4c994-5e8a-4adf-b08e-4f16772d6230 0xc004b22c10 0xc004b22c11}] []  [{kube-controller-manager Update apps/v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 98 97 52 99 57 57 52 45 53 101 56 97 45 52 97 100 102 45 98 48 56 101 45 52 102 49 54 55 55 50 100 54 50 51 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 
58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b22c88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 11:45:33.969: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 25 11:45:33.969: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-620 /apis/apps/v1/namespaces/deployment-620/replicasets/test-recreate-deployment-74d98b5f7c 6f489ef3-dc76-41a2-b398-688ebcdfae31 4041501 2 2020-07-25 11:45:29 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment fba4c994-5e8a-4adf-b08e-4f16772d6230 0xc004b22b17 0xc004b22b18}] []  [{kube-controller-manager Update apps/v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 98 97 52 99 57 57 52 45 53 101 56 97 45 52 97 100 102 45 98 48 56 101 45 52 102 49 54 55 55 50 100 54 50 51 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b22ba8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 25 11:45:33.977: INFO: Pod "test-recreate-deployment-d5667d9c7-6v6fc" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-6v6fc test-recreate-deployment-d5667d9c7- deployment-620 /api/v1/namespaces/deployment-620/pods/test-recreate-deployment-d5667d9c7-6v6fc 2e9d6d9d-fa15-4ad8-a937-f4204e5ad2b2 4041512 0 2020-07-25 11:45:33 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 39e286a8-7a83-4e24-8291-1b749e89985f 0xc004b23150 0xc004b23151}] []  [{kube-controller-manager Update v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 57 101 50 56 54 97 56 45 55 97 56 51 45 52 101 50 52 45 56 50 57 49 45 49 98 55 52 57 101 56 57 57 56 53 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-25 11:45:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 
121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qw2hv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qw2hv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qw2hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,R
eadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:45:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:45:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:45:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-25 11:45:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-25 11:45:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:45:33.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-620" for this suite.

• [SLOW TEST:5.149 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":250,"skipped":4325,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:45:33.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul 25 11:45:34.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5972'
Jul 25 11:45:34.598: INFO: stderr: ""
Jul 25 11:45:34.598: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 25 11:45:34.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972'
Jul 25 11:45:35.032: INFO: stderr: ""
Jul 25 11:45:35.032: INFO: stdout: "update-demo-nautilus-mk25b update-demo-nautilus-q57jr "
Jul 25 11:45:35.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mk25b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972'
Jul 25 11:45:35.556: INFO: stderr: ""
Jul 25 11:45:35.556: INFO: stdout: ""
Jul 25 11:45:35.556: INFO: update-demo-nautilus-mk25b is created but not running
Jul 25 11:45:40.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5972'
Jul 25 11:45:40.656: INFO: stderr: ""
Jul 25 11:45:40.656: INFO: stdout: "update-demo-nautilus-mk25b update-demo-nautilus-q57jr "
Jul 25 11:45:40.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mk25b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972'
Jul 25 11:45:40.745: INFO: stderr: ""
Jul 25 11:45:40.745: INFO: stdout: "true"
Jul 25 11:45:40.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mk25b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972'
Jul 25 11:45:40.835: INFO: stderr: ""
Jul 25 11:45:40.835: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 11:45:40.835: INFO: validating pod update-demo-nautilus-mk25b
Jul 25 11:45:40.838: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 11:45:40.838: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 11:45:40.838: INFO: update-demo-nautilus-mk25b is verified up and running
Jul 25 11:45:40.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q57jr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5972'
Jul 25 11:45:40.928: INFO: stderr: ""
Jul 25 11:45:40.928: INFO: stdout: "true"
Jul 25 11:45:40.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q57jr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5972'
Jul 25 11:45:41.023: INFO: stderr: ""
Jul 25 11:45:41.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 25 11:45:41.023: INFO: validating pod update-demo-nautilus-q57jr
Jul 25 11:45:41.027: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 25 11:45:41.027: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 25 11:45:41.027: INFO: update-demo-nautilus-q57jr is verified up and running
STEP: using delete to clean up resources
Jul 25 11:45:41.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5972'
Jul 25 11:45:41.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 25 11:45:41.126: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 25 11:45:41.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5972'
Jul 25 11:45:41.218: INFO: stderr: "No resources found in kubectl-5972 namespace.\n"
Jul 25 11:45:41.218: INFO: stdout: ""
Jul 25 11:45:41.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5972 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 25 11:45:41.309: INFO: stderr: ""
Jul 25 11:45:41.309: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:45:41.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5972" for this suite.

• [SLOW TEST:7.381 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":251,"skipped":4345,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:45:41.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:45:49.534: INFO: DNS probes using dns-test-f897020d-07ba-4c13-bc88-56553ea45793 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:45:57.666: INFO: File wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:45:57.752: INFO: File jessie_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:45:57.752: INFO: Lookups using dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 failed for: [wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local jessie_udp@dns-test-service-3.dns-657.svc.cluster.local]

Jul 25 11:46:02.757: INFO: File wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:02.760: INFO: File jessie_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:02.760: INFO: Lookups using dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 failed for: [wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local jessie_udp@dns-test-service-3.dns-657.svc.cluster.local]

Jul 25 11:46:07.757: INFO: File wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:07.760: INFO: File jessie_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:07.760: INFO: Lookups using dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 failed for: [wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local jessie_udp@dns-test-service-3.dns-657.svc.cluster.local]

Jul 25 11:46:12.756: INFO: File wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:12.760: INFO: File jessie_udp@dns-test-service-3.dns-657.svc.cluster.local from pod  dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 25 11:46:12.760: INFO: Lookups using dns-657/dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 failed for: [wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local jessie_udp@dns-test-service-3.dns-657.svc.cluster.local]

Jul 25 11:46:17.759: INFO: DNS probes using dns-test-87d9df28-8357-4d73-806c-2b095cc80d31 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-657.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-657.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:46:24.496: INFO: DNS probes using dns-test-29dad56a-4e18-4c41-9003-d94e2ccd5036 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:46:24.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-657" for this suite.

• [SLOW TEST:43.215 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":252,"skipped":4347,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:46:24.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-852 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-852;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-852 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-852;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-852.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-852.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-852.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-852.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-852.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-852.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-852.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.225.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.225.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.225.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.225.251_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-852 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-852;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-852 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-852;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-852.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-852.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-852.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-852.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-852.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-852.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-852.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-852.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-852.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-852.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.225.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.225.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.225.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.225.251_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:46:35.140: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.143: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.146: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.153: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.157: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.160: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.179: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.182: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.185: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.190: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.196: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.199: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:35.217: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:46:40.320: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.323: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.531: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.534: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.537: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.540: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.542: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.544: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.547: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.549: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:40.567: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:46:45.222: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.225: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.234: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.236: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.239: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.242: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.261: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.264: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.267: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.270: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.273: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:45.303: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:46:50.223: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.227: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.232: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.235: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.238: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.242: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.244: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.284: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.287: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.290: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.292: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.295: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.297: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.300: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.303: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:50.320: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:46:55.223: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.227: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.231: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.234: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.238: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.242: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.245: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.248: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.269: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.273: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.276: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.283: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.287: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.290: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.293: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:46:55.309: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:47:00.223: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.226: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.236: INFO: Unable to read wheezy_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.239: INFO: Unable to read wheezy_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.242: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.268: INFO: Unable to read jessie_udp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.271: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.274: INFO: Unable to read jessie_udp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-852 from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.280: INFO: Unable to read jessie_udp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.283: INFO: Unable to read jessie_tcp@dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.286: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.290: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-852.svc from pod dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3: the server could not find the requested resource (get pods dns-test-3ca60642-2949-4393-a152-359e812274c3)
Jul 25 11:47:00.307: INFO: Lookups using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-852 wheezy_tcp@dns-test-service.dns-852 wheezy_udp@dns-test-service.dns-852.svc wheezy_tcp@dns-test-service.dns-852.svc wheezy_udp@_http._tcp.dns-test-service.dns-852.svc wheezy_tcp@_http._tcp.dns-test-service.dns-852.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-852 jessie_tcp@dns-test-service.dns-852 jessie_udp@dns-test-service.dns-852.svc jessie_tcp@dns-test-service.dns-852.svc jessie_udp@_http._tcp.dns-test-service.dns-852.svc jessie_tcp@_http._tcp.dns-test-service.dns-852.svc]

Jul 25 11:47:05.305: INFO: DNS probes using dns-852/dns-test-3ca60642-2949-4393-a152-359e812274c3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:47:06.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-852" for this suite.

• [SLOW TEST:41.477 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":253,"skipped":4371,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:47:06.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:47:06.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1" in namespace "downward-api-1383" to be "Succeeded or Failed"
Jul 25 11:47:06.195: INFO: Pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.622504ms
Jul 25 11:47:08.415: INFO: Pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239889725s
Jul 25 11:47:10.420: INFO: Pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.244573758s
Jul 25 11:47:12.425: INFO: Pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248985374s
STEP: Saw pod success
Jul 25 11:47:12.425: INFO: Pod "downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1" satisfied condition "Succeeded or Failed"
Jul 25 11:47:12.428: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1 container client-container: 
STEP: delete the pod
Jul 25 11:47:12.507: INFO: Waiting for pod downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1 to disappear
Jul 25 11:47:12.519: INFO: Pod downwardapi-volume-a60b281e-d662-40c6-943d-6b79321ad3f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:47:12.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1383" for this suite.

• [SLOW TEST:6.469 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4379,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:47:12.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:47:12.642: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda" in namespace "projected-1528" to be "Succeeded or Failed"
Jul 25 11:47:12.652: INFO: Pod "downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 9.332825ms
Jul 25 11:47:14.656: INFO: Pod "downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013721967s
Jul 25 11:47:16.661: INFO: Pod "downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018239895s
STEP: Saw pod success
Jul 25 11:47:16.661: INFO: Pod "downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda" satisfied condition "Succeeded or Failed"
Jul 25 11:47:16.664: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda container client-container: 
STEP: delete the pod
Jul 25 11:47:16.689: INFO: Waiting for pod downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda to disappear
Jul 25 11:47:16.699: INFO: Pod downwardapi-volume-90c198c1-fc21-4302-bc34-287526b9dcda no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:47:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1528" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4380,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:47:16.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6373.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 74.188.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.188.74_udp@PTR;check="$$(dig +tcp +noall +answer +search 74.188.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.188.74_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6373.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6373.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 74.188.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.188.74_udp@PTR;check="$$(dig +tcp +noall +answer +search 74.188.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.188.74_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 25 11:47:22.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.960: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.964: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.967: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.989: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:22.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:23.017: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:28.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.040: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.073: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.077: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.079: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.082: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:28.105: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:33.022: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.027: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.031: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.034: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.050: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.057: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:33.075: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:38.022: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.026: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.030: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.034: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.053: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.056: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.059: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:38.086: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:43.027: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.034: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.036: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.060: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.063: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.065: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:43.085: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:48.021: INFO: Unable to read wheezy_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.024: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.027: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.031: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.064: INFO: Unable to read jessie_udp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.068: INFO: Unable to read jessie_tcp@dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.071: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local from pod dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660: the server could not find the requested resource (get pods dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660)
Jul 25 11:47:48.093: INFO: Lookups using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 failed for: [wheezy_udp@dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@dns-test-service.dns-6373.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_udp@dns-test-service.dns-6373.svc.cluster.local jessie_tcp@dns-test-service.dns-6373.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6373.svc.cluster.local]

Jul 25 11:47:53.123: INFO: DNS probes using dns-6373/dns-test-adb63a90-7f81-4ff6-8124-39429cc0e660 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:47:53.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6373" for this suite.

• [SLOW TEST:36.982 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":256,"skipped":4392,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:47:53.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5041
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5041
I0725 11:47:53.910768       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5041, replica count: 2
I0725 11:47:56.961239       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0725 11:47:59.961501       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 25 11:47:59.961: INFO: Creating new exec pod
Jul 25 11:48:04.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5041 execpodf9w5w -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 25 11:48:05.163: INFO: stderr: "I0725 11:48:05.097993    3695 log.go:172] (0xc000a46dc0) (0xc000a6c3c0) Create stream\nI0725 11:48:05.098048    3695 log.go:172] (0xc000a46dc0) (0xc000a6c3c0) Stream added, broadcasting: 1\nI0725 11:48:05.101884    3695 log.go:172] (0xc000a46dc0) Reply frame received for 1\nI0725 11:48:05.101926    3695 log.go:172] (0xc000a46dc0) (0xc000556320) Create stream\nI0725 11:48:05.101940    3695 log.go:172] (0xc000a46dc0) (0xc000556320) Stream added, broadcasting: 3\nI0725 11:48:05.102619    3695 log.go:172] (0xc000a46dc0) Reply frame received for 3\nI0725 11:48:05.102644    3695 log.go:172] (0xc000a46dc0) (0xc000a28000) Create stream\nI0725 11:48:05.102650    3695 log.go:172] (0xc000a46dc0) (0xc000a28000) Stream added, broadcasting: 5\nI0725 11:48:05.103390    3695 log.go:172] (0xc000a46dc0) Reply frame received for 5\nI0725 11:48:05.154431    3695 log.go:172] (0xc000a46dc0) Data frame received for 5\nI0725 11:48:05.154456    3695 log.go:172] (0xc000a28000) (5) Data frame handling\nI0725 11:48:05.154471    3695 log.go:172] (0xc000a28000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0725 11:48:05.155317    3695 log.go:172] (0xc000a46dc0) Data frame received for 5\nI0725 11:48:05.155333    3695 log.go:172] (0xc000a28000) (5) Data frame handling\nI0725 11:48:05.155345    3695 log.go:172] (0xc000a28000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0725 11:48:05.155570    3695 log.go:172] (0xc000a46dc0) Data frame received for 3\nI0725 11:48:05.155583    3695 log.go:172] (0xc000556320) (3) Data frame handling\nI0725 11:48:05.156040    3695 log.go:172] (0xc000a46dc0) Data frame received for 5\nI0725 11:48:05.156065    3695 log.go:172] (0xc000a28000) (5) Data frame handling\nI0725 11:48:05.158464    3695 log.go:172] (0xc000a46dc0) Data frame received for 1\nI0725 11:48:05.158486    3695 log.go:172] (0xc000a6c3c0) (1) Data frame handling\nI0725 11:48:05.158505    3695 log.go:172] (0xc000a6c3c0) (1) Data frame sent\nI0725 11:48:05.158533    3695 log.go:172] (0xc000a46dc0) (0xc000a6c3c0) Stream removed, broadcasting: 1\nI0725 11:48:05.158572    3695 log.go:172] (0xc000a46dc0) Go away received\nI0725 11:48:05.159056    3695 log.go:172] (0xc000a46dc0) (0xc000a6c3c0) Stream removed, broadcasting: 1\nI0725 11:48:05.159088    3695 log.go:172] (0xc000a46dc0) (0xc000556320) Stream removed, broadcasting: 3\nI0725 11:48:05.159100    3695 log.go:172] (0xc000a46dc0) (0xc000a28000) Stream removed, broadcasting: 5\n"
Jul 25 11:48:05.163: INFO: stdout: ""
Jul 25 11:48:05.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5041 execpodf9w5w -- /bin/sh -x -c nc -zv -t -w 2 10.101.216.102 80'
Jul 25 11:48:05.398: INFO: stderr: "I0725 11:48:05.337453    3718 log.go:172] (0xc0000e8e70) (0xc0008640a0) Create stream\nI0725 11:48:05.337527    3718 log.go:172] (0xc0000e8e70) (0xc0008640a0) Stream added, broadcasting: 1\nI0725 11:48:05.339813    3718 log.go:172] (0xc0000e8e70) Reply frame received for 1\nI0725 11:48:05.339850    3718 log.go:172] (0xc0000e8e70) (0xc00044b400) Create stream\nI0725 11:48:05.339862    3718 log.go:172] (0xc0000e8e70) (0xc00044b400) Stream added, broadcasting: 3\nI0725 11:48:05.340654    3718 log.go:172] (0xc0000e8e70) Reply frame received for 3\nI0725 11:48:05.340710    3718 log.go:172] (0xc0000e8e70) (0xc000364000) Create stream\nI0725 11:48:05.340829    3718 log.go:172] (0xc0000e8e70) (0xc000364000) Stream added, broadcasting: 5\nI0725 11:48:05.341693    3718 log.go:172] (0xc0000e8e70) Reply frame received for 5\nI0725 11:48:05.391627    3718 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0725 11:48:05.391662    3718 log.go:172] (0xc000364000) (5) Data frame handling\nI0725 11:48:05.391669    3718 log.go:172] (0xc000364000) (5) Data frame sent\nI0725 11:48:05.391676    3718 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0725 11:48:05.391683    3718 log.go:172] (0xc000364000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.216.102 80\nConnection to 10.101.216.102 80 port [tcp/http] succeeded!\nI0725 11:48:05.391703    3718 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0725 11:48:05.391708    3718 log.go:172] (0xc00044b400) (3) Data frame handling\nI0725 11:48:05.393059    3718 log.go:172] (0xc0000e8e70) Data frame received for 1\nI0725 11:48:05.393082    3718 log.go:172] (0xc0008640a0) (1) Data frame handling\nI0725 11:48:05.393089    3718 log.go:172] (0xc0008640a0) (1) Data frame sent\nI0725 11:48:05.393097    3718 log.go:172] (0xc0000e8e70) (0xc0008640a0) Stream removed, broadcasting: 1\nI0725 11:48:05.393211    3718 log.go:172] (0xc0000e8e70) Go away received\nI0725 11:48:05.393340    3718 log.go:172] (0xc0000e8e70) (0xc0008640a0) Stream removed, broadcasting: 1\nI0725 11:48:05.393353    3718 log.go:172] (0xc0000e8e70) (0xc00044b400) Stream removed, broadcasting: 3\nI0725 11:48:05.393359    3718 log.go:172] (0xc0000e8e70) (0xc000364000) Stream removed, broadcasting: 5\n"
Jul 25 11:48:05.398: INFO: stdout: ""
Jul 25 11:48:05.398: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:48:05.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5041" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.777 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":257,"skipped":4392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:48:05.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 25 11:48:05.526: INFO: >>> kubeConfig: /root/.kube/config
Jul 25 11:48:08.453: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:48:19.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8084" for this suite.

• [SLOW TEST:13.650 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":258,"skipped":4417,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:48:19.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-f7dd8f8b-9dec-4395-9aac-c052ab557999
STEP: Creating secret with name secret-projected-all-test-volume-810c83b5-8143-4ca3-be66-6fb4ec90ebd0
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 25 11:48:19.255: INFO: Waiting up to 5m0s for pod "projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8" in namespace "projected-7490" to be "Succeeded or Failed"
Jul 25 11:48:19.274: INFO: Pod "projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.430487ms
Jul 25 11:48:21.281: INFO: Pod "projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025726419s
Jul 25 11:48:23.286: INFO: Pod "projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030597815s
STEP: Saw pod success
Jul 25 11:48:23.286: INFO: Pod "projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8" satisfied condition "Succeeded or Failed"
Jul 25 11:48:23.289: INFO: Trying to get logs from node kali-worker2 pod projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8 container projected-all-volume-test: 
STEP: delete the pod
Jul 25 11:48:23.308: INFO: Waiting for pod projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8 to disappear
Jul 25 11:48:23.345: INFO: Pod projected-volume-8c42bfe6-030b-4450-96ad-245b500c2ea8 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:48:23.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7490" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4432,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:48:23.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-0999c5ee-d811-466a-811d-f6b4aea9ba13 in namespace container-probe-1815
Jul 25 11:48:27.480: INFO: Started pod test-webserver-0999c5ee-d811-466a-811d-f6b4aea9ba13 in namespace container-probe-1815
STEP: checking the pod's current state and verifying that restartCount is present
Jul 25 11:48:27.482: INFO: Initial restart count of pod test-webserver-0999c5ee-d811-466a-811d-f6b4aea9ba13 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:52:29.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1815" for this suite.

• [SLOW TEST:245.792 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4443,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:52:29.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:52:29.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3" in namespace "downward-api-119" to be "Succeeded or Failed"
Jul 25 11:52:29.461: INFO: Pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3": Phase="Pending", Reason="", readiness=false. Elapsed: 161.775935ms
Jul 25 11:52:31.465: INFO: Pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16612635s
Jul 25 11:52:33.469: INFO: Pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3": Phase="Running", Reason="", readiness=true. Elapsed: 4.169913344s
Jul 25 11:52:35.473: INFO: Pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174111938s
STEP: Saw pod success
Jul 25 11:52:35.473: INFO: Pod "downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3" satisfied condition "Succeeded or Failed"
Jul 25 11:52:35.476: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3 container client-container: 
STEP: delete the pod
Jul 25 11:52:35.523: INFO: Waiting for pod downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3 to disappear
Jul 25 11:52:35.531: INFO: Pod downwardapi-volume-678c83d2-2def-42b6-a955-8048f85f0be3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:52:35.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-119" for this suite.

• [SLOW TEST:6.422 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4447,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:52:35.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-258/configmap-test-262db00c-243c-4ef7-aa69-7dc2d29ea3e1
STEP: Creating a pod to test consume configMaps
Jul 25 11:52:35.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f" in namespace "configmap-258" to be "Succeeded or Failed"
Jul 25 11:52:35.639: INFO: Pod "pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378316ms
Jul 25 11:52:37.643: INFO: Pod "pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008582325s
Jul 25 11:52:39.647: INFO: Pod "pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012149026s
STEP: Saw pod success
Jul 25 11:52:39.647: INFO: Pod "pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f" satisfied condition "Succeeded or Failed"
Jul 25 11:52:39.649: INFO: Trying to get logs from node kali-worker pod pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f container env-test: 
STEP: delete the pod
Jul 25 11:52:39.666: INFO: Waiting for pod pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f to disappear
Jul 25 11:52:39.680: INFO: Pod pod-configmaps-66cee746-646e-4fbd-8907-1860ce43d41f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:52:39.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-258" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4448,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:52:39.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 25 11:52:40.574: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 25 11:52:42.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 25 11:52:44.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731274760, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 25 11:52:47.964: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:00.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3395" for this suite.
STEP: Destroying namespace "webhook-3395-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.939 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":263,"skipped":4459,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:00.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:53:00.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:04.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6560" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4467,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:04.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:53:04.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 25 11:53:07.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4516 create -f -'
Jul 25 11:53:11.088: INFO: stderr: ""
Jul 25 11:53:11.088: INFO: stdout: "e2e-test-crd-publish-openapi-5541-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 25 11:53:11.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4516 delete e2e-test-crd-publish-openapi-5541-crds test-cr'
Jul 25 11:53:11.212: INFO: stderr: ""
Jul 25 11:53:11.212: INFO: stdout: "e2e-test-crd-publish-openapi-5541-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul 25 11:53:11.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4516 apply -f -'
Jul 25 11:53:11.455: INFO: stderr: ""
Jul 25 11:53:11.455: INFO: stdout: "e2e-test-crd-publish-openapi-5541-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 25 11:53:11.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4516 delete e2e-test-crd-publish-openapi-5541-crds test-cr'
Jul 25 11:53:11.558: INFO: stderr: ""
Jul 25 11:53:11.558: INFO: stdout: "e2e-test-crd-publish-openapi-5541-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul 25 11:53:11.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5541-crds'
Jul 25 11:53:11.803: INFO: stderr: ""
Jul 25 11:53:11.803: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5541-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:14.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4516" for this suite.

• [SLOW TEST:9.903 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":265,"skipped":4482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:14.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:19.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8470" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4506,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:19.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0725 11:53:20.197765       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 25 11:53:20.197: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:20.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5303" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":267,"skipped":4520,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:20.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 25 11:53:25.596: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:25.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7355" for this suite.

• [SLOW TEST:5.416 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:25.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:25.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2582" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":269,"skipped":4556,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:25.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 25 11:53:25.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22" in namespace "projected-7081" to be "Succeeded or Failed"
Jul 25 11:53:25.866: INFO: Pod "downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22": Phase="Pending", Reason="", readiness=false. Elapsed: 20.869118ms
Jul 25 11:53:27.934: INFO: Pod "downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088410021s
Jul 25 11:53:29.938: INFO: Pod "downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092358854s
STEP: Saw pod success
Jul 25 11:53:29.938: INFO: Pod "downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22" satisfied condition "Succeeded or Failed"
Jul 25 11:53:29.941: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22 container client-container: 
STEP: delete the pod
Jul 25 11:53:29.979: INFO: Waiting for pod downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22 to disappear
Jul 25 11:53:29.986: INFO: Pod downwardapi-volume-f30bda03-13c5-44b6-a0ff-c8ad513c2a22 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:29.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7081" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4570,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:29.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-f5557065-a62f-4949-b970-f1708d0826ac
STEP: Creating secret with name s-test-opt-upd-c7bcec8d-8423-4345-95f9-416df74b3798
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f5557065-a62f-4949-b970-f1708d0826ac
STEP: Updating secret s-test-opt-upd-c7bcec8d-8423-4345-95f9-416df74b3798
STEP: Creating secret with name s-test-opt-create-8ed8b79b-960d-405b-9053-d3a8046ac3c0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:38.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-354" for this suite.

• [SLOW TEST:8.269 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4595,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:38.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 25 11:53:48.427: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 25 11:53:48.449: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 25 11:53:50.449: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 25 11:53:50.453: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 25 11:53:52.449: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 25 11:53:52.452: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:52.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8902" for this suite.

• [SLOW TEST:14.202 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4601,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:52.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 25 11:53:52.725: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"aa9851a3-ef53-4853-8fef-f8cebb7b366b", Controller:(*bool)(0xc0043013ea), BlockOwnerDeletion:(*bool)(0xc0043013eb)}}
Jul 25 11:53:52.758: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ea575ba0-0da0-4186-9298-8bbe63bc10f0", Controller:(*bool)(0xc004301612), BlockOwnerDeletion:(*bool)(0xc004301613)}}
Jul 25 11:53:52.782: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dfa3a9d5-cba4-44d1-b949-4cc5ed872b45", Controller:(*bool)(0xc00424c2ba), BlockOwnerDeletion:(*bool)(0xc00424c2bb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:53:57.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1532" for this suite.

• [SLOW TEST:5.660 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":273,"skipped":4627,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:53:58.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 25 11:54:03.378: INFO: Successfully updated pod "annotationupdate0ec18263-0111-493a-bf2d-f2d1324c559b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:54:07.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9052" for this suite.

• [SLOW TEST:9.310 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4644,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 25 11:54:07.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-e0c12716-8598-4b01-80c1-41d29f2843a7
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 25 11:54:07.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2387" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":275,"skipped":4706,"failed":0}
SSSSSSSSSSS
Jul 25 11:54:07.535: INFO: Running AfterSuite actions on all nodes
Jul 25 11:54:07.535: INFO: Running AfterSuite actions on node 1
Jul 25 11:54:07.535: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4975.955 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS