High Level Architecture
=======================

Introduction
------------

The Anuket Kubernetes Reference Architecture (RA) is intended to be an
industry-standard, independent Kubernetes reference architecture that is not
tied to any specific offering or distribution. No vendor-specific enhancements
are required in order to achieve conformance to the principles of Anuket
specifications; conformance is achieved by using upstream components or
features that are developed by the open source community. This allows operators
to have a common Kubernetes-based architecture that supports any conformant VNF
or CNF deployed on it to operate as expected. The purpose of this chapter is to
outline all the components required to provide Kubernetes in a consistent and
reliable way. The specification of how to use these components is detailed in
Chapter 04, :ref:`chapters/chapter04:component level architecture`.
Kubernetes is already a well documented and widely deployed Open Source project
managed by the Cloud Native Computing Foundation (CNCF). Full documentation of
the Kubernetes code and project can be found at
`https://kubernetes.io/docs/home/ <https://kubernetes.io/docs/home/>`__. The
following chapters will only describe the specific features required by the
Anuket Reference Architecture, and how they would be expected to be
implemented. For any information related to standard Kubernetes features and
capabilities, refer back to the standard Kubernetes documentation.
While this reference architecture provides options for pluggable components
such as service mesh and other plugins that might be used, the focus of the
reference architecture is on the abstracted interfaces and features that are
required for telco type workload management and execution.
Chapter 5 of the Reference Model (RM) describes the hardware and software
profiles that are descriptions of the capabilities and features that the Cloud
Infrastructure provides to the workloads. The NFVI Software Profile figure
below depicts a high level view of the software profile features that apply to
each instance profile (Basic and High Performance). For more information on the
instance profiles please refer to :ref:`ref_model:chapters/chapter04:profiles`.

.. image:: ../../../ref_model/figures/RM-ch05-sw-profile.png
   :alt: "Figure 5-3 (from RM): NFVI software profiles"

**Figure 5-3 (from RM):** NFVI software profiles
In addition, the RM Figure 5-4 (shown below) depicts the hardware profile
features that apply to each instance profile.

.. image:: ../../../ref_model/figures/RM_chap5_fig_5_4_HW_profile.png
   :alt: "Figure 5-4 (from RM): NFVI hardware profiles and host associated capabilities"

**Figure 5-4 (from RM):** NFVI hardware profiles and host associated capabilities
The features and capabilities described in the software and hardware profiles
are considered throughout this RA, with the RA requirements traceability to the
RM requirements formally documented in
:ref:`chapters/chapter02:architecture requirements` of this RA.

Infrastructure Services
-----------------------

Container Compute Services
~~~~~~~~~~~~~~~~~~~~~~~~~~
The primary interface between the Physical / Virtual Infrastructure and any
container-relevant components is the Kubernetes Node Operating System. This is
the OS within which the container runtime exists, and within which the
containers run (and therefore, the OS whose kernel is shared by the referenced
containers). This is shown in Figure 3-1 below.

.. image:: ../figures/ch03_hostOS.png
   :alt: "Figure 3-1: Kubernetes Node Operating System"

**Figure 3-1:** Kubernetes Node Operating System

The Kubernetes Node OS (as with any OS) consists of two main components:

- Kernel space
- User space
The Kernel is the tightly controlled space that provides an API to applications
running in the user space (which usually have their own southbound interface in
an interpreter or libraries). Key containerisation capabilities such as Control
Groups (cgroups) and namespaces are kernel features, and are used and managed
by the container runtime in order to provide isolation between the user space
processes, which would also include the container itself as well as any
processes running within it. The security of the Kubernetes Node OS and its
relationship to the container and the applications running within the container
or containers is essential to the overall security posture of the entire
system, and must be appropriately secured to ensure processes running in one
container cannot escalate their privileges or otherwise affect processes
running in an adjacent container. An example and more details of this concept
can be found in :ref:`chapters/chapter06:api and feature testing requirements`.
It is important to note that the container runtime itself is also a set of
processes that run in user space, and therefore also interact with the kernel
via system calls. Many diagrams will show containers as running on top of the
runtime, or inside the runtime. More accurately, the containers themselves are
simply processes running within an OS; the container runtime is simply another
set of processes that are used to manage these containers (pull, run, delete,
etc.) and the kernel features required to provide the isolation mechanisms
(cgroups, namespaces, filesystems, etc.) between the components.
Container Runtime Services
^^^^^^^^^^^^^^^^^^^^^^^^^^

The Container Runtime is the component that runs within a Kubernetes Node
Operating System (OS) and manages the underlying OS functionality, such as
cgroups and namespaces (in Linux), in order to provide a service within which
container images can be executed and make use of the infrastructure resources
(compute, storage, networking and other I/O devices) abstracted by the
Container Host OS, based on API instructions from the kubelet.
There are a number of different container runtimes. The simplest form,
low-level container runtimes, just manage the OS capabilities such as cgroups
and namespaces, and then run commands from within those cgroups and namespaces.
An example of this type of runtime is runc, which underpins many of the
higher-level runtimes and is considered a reference implementation of the `Open
Container Initiative (OCI) runtime spec
<https://github.com/opencontainers/runtime-spec>`__. This specification
includes details on how an implementation (i.e. an actual container runtime
such as runc) must, for example, configure resource shares and limits (e.g.,
CPU, Memory, IOPS) for the containers that Kubernetes (via the kubelet)
schedules on that host. This is important to ensure that the features and
capabilities described in :doc:`ref_model:chapters/chapter05` are supported by
this RA and delivered by any downstream Reference Implementations (RIs) to the
instance types defined in the RM.
Where low-level runtimes are used for the execution of a container within an
OS, the more complex/complete high-level container runtimes are used for the
general management of container images - moving them to where they need to be
executed, unpacking them, and then passing them to the low-level runtime, which
then executes the container. These high-level runtimes also include a
comprehensive API that other applications (e.g., Kubernetes) can use to
interact and manage the containers. An example of this type of runtime is
containerd, which provides the features described above, before passing off the
unpacked container image to runc for execution.
For Kubernetes the important interface to consider for container management is
the `Kubernetes Container Runtime Interface (CRI)
<https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/>`__.
This is an interface specification for any container runtime so that it is able
to integrate with the kubelet on a Kubernetes Node. The CRI decouples the
kubelet from the runtime that is running in the Host OS, meaning that the code
required to integrate kubelet with a container runtime is not part of the
kubelet itself (i.e., if a new container runtime is needed and it uses CRI, it
will work with kubelet). Examples of this type of runtime include containerd
(with CRI plugin) and cri-o, which is built specifically to work with
Kubernetes.
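As a purely illustrative sketch, the kubelet is pointed at a CRI implementation
through its runtime endpoint. The socket path depends on the runtime chosen,
and the configuration-file field shown here is only available in recent kubelet
versions (v1.27 and later); older versions expressed the same setting with the
``--container-runtime-endpoint`` command-line flag:

.. code-block:: yaml

   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   # gRPC socket of the CRI runtime on this node; containerd is shown,
   # cri-o typically listens on unix:///var/run/crio/crio.sock
   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock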
To fulfil ``req.inf.vir.01``, the architecture should support a container
runtime which provides the isolation of Operating System kernels.

The architecture must support a way to isolate the compute resources of the
infrastructure itself from the workloads' compute resources.
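One way this isolation can be realised (a sketch of one possible mechanism, not
a normative setting; the reserved amounts and CPU list are assumptions) is
through the kubelet's resource reservations, which withhold node resources from
the pool allocatable to pods:

.. code-block:: yaml

   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   systemReserved:            # withheld for operating system daemons
     memory: "1Gi"
   kubeReserved:              # withheld for the kubelet and container runtime
     memory: "1Gi"
   # Explicit cores reserved for infrastructure processes; pods are never
   # allocated these CPUs (takes precedence over CPU quantities in the
   # reservations above).
   reservedSystemCPUs: "0,1"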
The basic semantics of Kubernetes, and the information found in manifests,
define the built-in Kubernetes objects and their desired state.

.. list-table:: Kubernetes built-in objects
   :widths: 20 80
   :header-rows: 1

   * - Pod and workloads
     - Description
   * - `Pod <https://kubernetes.io/docs/concepts/workloads/pods/>`__
     - Pod is a collection of containers that can run on a node. This resource
       is created by clients and scheduled onto nodes.
   * - `ReplicaSet <https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/>`__
     - ReplicaSet ensures that a specified number of pod replicas are running
       at any given time.
   * - `Deployment <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`__
     - Deployment enables declarative updates for Pods and ReplicaSets.
   * - `DaemonSet <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/>`__
     - A DaemonSet ensures that the correct nodes run a copy of a Pod.
   * - `Job <https://kubernetes.io/docs/concepts/workloads/controllers/job/>`__
     - A Job represents a task; it creates one or more Pods and will continue
       to retry until the expected number of successful completions is reached.
   * - `CronJob <https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/>`__
     - A CronJob manages time-based Jobs, namely: once at a specified point in
       time and repeatedly at a specified point in time.
   * - `StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`__
     - StatefulSet represents a set of pods with consistent identities.
       Identities are defined as: network, storage.
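As a brief illustration of the declarative model behind these objects, the
hypothetical manifest below (names and image are placeholders) declares the
desired state of a Deployment; the Deployment controller then creates and
maintains a matching ReplicaSet and its Pods:

.. code-block:: yaml

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: web                  # illustrative name
   spec:
     replicas: 3                # desired state: three Pod replicas at all times
     selector:
       matchLabels:
         app: web
     template:                  # Pod template used by the generated ReplicaSet
       metadata:
         labels:
           app: web
       spec:
         containers:
         - name: web
           image: nginx:1.21    # placeholder image
           ports:
           - containerPort: 80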
CPU Management
^^^^^^^^^^^^^^

CPU management has policies to determine placement preferences to use for
workloads that are sensitive to cache affinity or latency, and so the workloads
must not be moved by the OS scheduler or throttled by the kubelet.
Additionally, some workloads are sensitive to differences between physical
cores and SMT, while others (like DPDK-based workloads) are designed to run on
isolated CPUs (like on Linux with cpuset-based selection of CPUs and the
isolcpus kernel parameter specifying cores isolated from general SMP balancing
and scheduler algorithms).

The Kubernetes `CPU Manager
<https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/>`__
works with the Topology Manager. Special care needs to be taken of:

- Supporting isolated CPUs: Using kubelet `Reserved CPUs
  <https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-list>`__
  and Linux isolcpus allows configuration where only isolcpus are allocatable
  to pods. Scheduling pods to such nodes can be influenced with taints,
  tolerations and node affinity.
- Differentiating between physical cores and SMT: When requesting an even
  number of CPU cores for pods, scheduling can be influenced with taints,
  tolerations, and node affinity.
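Combining these mechanisms, the following hedged sketch (the names, image and
reserved-core list are assumptions) shows a kubelet configured with the static
CPU Manager policy and a pod that qualifies for exclusive cores by being in the
Guaranteed QoS class with integer CPU requests:

.. code-block:: yaml

   # Node side: kubelet configuration enabling exclusive CPU allocation
   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   cpuManagerPolicy: static
   reservedSystemCPUs: "0,1"    # keep housekeeping off the workload cores
   ---
   # Workload side: Guaranteed QoS (requests == limits, integer CPU count)
   apiVersion: v1
   kind: Pod
   metadata:
     name: pinned-workload                        # illustrative name
   spec:
     containers:
     - name: app
       image: registry.example.com/dpdk-app:1.0   # placeholder image
       resources:
         requests:
           cpu: "4"             # integer CPU request -> exclusive cores
           memory: "4Gi"
         limits:
           cpu: "4"
           memory: "4Gi"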
Memory and Huge Pages Resources Management
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Reference Model requires the support of huge pages in i.cap.018, which is
supported by upstream Kubernetes
(`documentation <https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/>`__).
For proper mapping of huge pages to scheduled pods, huge pages need to be
enabled both in the operating system (configured in the kernel and mounted with
the correct permissions) and in the kubelet configuration. Multiple sizes of
huge pages can be enabled, such as 2 MiB and 1 GiB.
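For example, a pod consuming pre-allocated 2 MiB huge pages can request them as
an extended resource, as in the hedged sketch below (the name and image are
placeholders); huge page requests must equal their limits, and are accompanied
here by an ordinary memory request:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: hugepages-demo                     # illustrative name
   spec:
     containers:
     - name: app
       image: registry.example.com/app:1.0    # placeholder image
       volumeMounts:
       - mountPath: /hugepages
         name: hugepage
       resources:
         requests:
           hugepages-2Mi: 512Mi   # 256 pages of 2 MiB
           memory: 256Mi
         limits:
           hugepages-2Mi: 512Mi   # must equal the request
           memory: 256Mi
     volumes:
     - name: hugepage
       emptyDir:
         medium: HugePages        # backed by the node's pre-allocated huge pages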
For some applications, huge pages should be allocated with consideration of the
underlying HW topology. `The Memory Manager
<https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/>`__ (added
to Kubernetes v1.21 as an alpha feature) enables guaranteed memory and huge
pages allocation for pods in the Guaranteed QoS class. The Memory Manager feeds
the Topology Manager with hints for the most suitable NUMA affinity.
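A hedged node-configuration sketch for the Static Memory Manager policy
follows; the reserved amount is illustrative and must be kept consistent with
the node's other reservation and eviction settings:

.. code-block:: yaml

   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   memoryManagerPolicy: Static    # NUMA-aware, guaranteed memory allocation
   # Memory the Memory Manager treats as reserved on each NUMA node
   reservedMemory:
   - numaNode: 0
     limits:
       memory: 1100Mi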
Hardware Topology Management
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Scheduling pods across NUMA boundaries can result in lower performance and
higher latencies. This would be an issue for applications that require
optimisations of CPU isolation, memory and device locality.
Kubernetes supports Topology policy per node as a beta feature
(`documentation <https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/>`__),
and not per pod. The Topology Manager receives Topology information from Hint
Providers which identify NUMA nodes (defined as server system architecture
divisions of CPU sockets) and preferred scheduling. In the case of a pod in the
Guaranteed QoS class having integer CPU requests, the static CPU Manager policy
would return topology hints relating to the exclusive CPU and the Device
Manager would provide hints for the requested device.
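The policy is selected in the kubelet configuration of each node; the sketch
below (an illustrative assumption, not a normative setting) rejects pods whose
exclusive CPUs and devices cannot be aligned on one NUMA node:

.. code-block:: yaml

   apiVersion: kubelet.config.k8s.io/v1beta1
   kind: KubeletConfiguration
   cpuManagerPolicy: static                 # acts as a hint provider
   topologyManagerPolicy: single-numa-node  # other values: none, best-effort, restricted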
If memory or huge pages are not considered by the Topology Manager, NUMA
alignment can instead be handled by the operating system providing best-effort
local page allocation for containers, as long as there is sufficient free local
memory on the node, or with the Control Groups (cgroups) cpuset subsystem,
which can isolate memory to a single NUMA node.
Node Feature Discovery
^^^^^^^^^^^^^^^^^^^^^^

`Node Feature Discovery
<https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html>`__
(NFD) can run on every node as a daemon or as a job. NFD detects detailed
hardware and software capabilities of each node and then advertises those
capabilities as node labels. Those node labels can be used in scheduling pods
by using Node Selector or Node Affinity for pods that require such
capabilities.
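For instance, a pod can require a node carrying an NFD-advertised CPU feature
label, as in the sketch below; the label follows NFD's feature-label naming,
and the specific AVX512F capability (as well as the pod name and image) is an
assumption for illustration:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: avx-workload                      # illustrative name
   spec:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: feature.node.kubernetes.io/cpu-cpuid.AVX512F  # label set by NFD
               operator: In
               values:
               - "true"
     containers:
     - name: app
       image: registry.example.com/app:1.0   # placeholder image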
Device Plugin Framework
^^^^^^^^^^^^^^^^^^^^^^^

The `Device Plugin Framework
<https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/>`__
advertises device hardware resources to the kubelet, with which vendors can
implement plugins for devices that may require vendor-specific activation and
life cycle management, and securely map these devices to containers.

Figure 3-2 shows in four steps how device plugins operate on a Kubernetes node:
- 1: During setup, the cluster administrator (more in
  :ref:`chapters/chapter03:operator pattern`) knows or discovers (as per
  :ref:`chapters/chapter03:node feature discovery`) what kind of devices are
  present on the different nodes, selects which devices to enable and deploys
  the associated device plugins.
- 2: The plugin reports the devices it found on the node to the Kubelet device
  manager and starts its gRPC server to monitor the devices.
- 3: A user submits a pod specification (workload manifest file) requesting a
  certain type of device.
- 4: The scheduler determines a suitable node based on device availability and
  the local kubelet assigns a specific device to the pod's containers.

.. image:: ../figures/Ch3_Figure_Device_Plugin_operation.png
   :alt: "Figure 3-2: Device Plugin Operation"

**Figure 3-2:** Device Plugin Operation
An example of an often used device plugin is the `SR-IOV Network Device Plugin
<https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin>`__, which
discovers and advertises SR-IOV Virtual Functions (VFs) available on a
Kubernetes node, and is used to map VFs to scheduled pods. To use it, the
SR-IOV CNI is required, as well as a CNI multiplexer plugin (such as `Multus
CNI <https://github.com/k8snetworkplumbingwg/multus-cni>`__ or `DANM
<https://github.com/nokia/danm>`__), to provision additional secondary network
interfaces for VFs (beyond the primary network interface). During pod creation,
the SR-IOV CNI allocates a SR-IOV VF to the pod's network namespace using the
VF information given by the meta plugin, and on pod deletion it releases the VF
from the pod.
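Putting these pieces together, a hedged workload sketch follows. The network
attachment name, the extended resource name of the VF pool, and the image are
deployment-specific assumptions; the annotation is consumed by the CNI
multiplexer (Multus), while the resource request is matched against what the
SR-IOV Network Device Plugin advertises:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: sriov-workload                     # illustrative name
     annotations:
       # Secondary network defined by a NetworkAttachmentDefinition (Multus)
       k8s.v1.cni.cncf.io/networks: sriov-net1
   spec:
     containers:
     - name: app
       image: registry.example.com/vnf:1.0    # placeholder image
       resources:
         requests:
           intel.com/intel_sriov_netdevice: "1"   # VF pool name is configuration-specific
         limits:
           intel.com/intel_sriov_netdevice: "1"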
Hardware Acceleration
^^^^^^^^^^^^^^^^^^^^^

Hardware Acceleration Abstraction in RM
:ref:`ref_model:chapters/chapter03:hardware acceleration abstraction` describes
types of hardware acceleration (CPU instructions, Fixed function accelerators,
Firmware-programmable adapters, SmartNICs and SmartSwitches), and usage for
Infrastructure Level Acceleration and Application Level Acceleration.
Scheduling pods that require or prefer to run on nodes with hardware
accelerators will depend on the type of accelerator used:

- CPU instructions can be found with Node Feature Discovery.
- Fixed function accelerators, Firmware-programmable network adapters and
  SmartNICs can be found and mapped to pods by using a Device Plugin.
Scheduling Pods with Non-resilient Applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Non-resilient applications are sensitive to platform impairments on Compute
(like pausing CPU cycles, for example because of the OS scheduler) or on
Networking (like packet drops, reordering or latencies). Such applications need
to be carefully scheduled on nodes and preferably still decoupled from the
infrastructure details of those nodes.
.. list-table:: Categories of applications, requirements for scheduling pods and Kubernetes features
   :widths: 10 20 20 20 30
   :header-rows: 1

   * - No.
     - Intensive on
     - Not intensive on
     - Using hardware acceleration
     - Requirements for optimised pod scheduling
   * - 1
     - Compute
     - Networking (dataplane)
     - No
     - CPU Manager
   * - 2
     - Compute