High Level Architecture
=======================

Introduction
------------

The Anuket Kubernetes Reference Architecture (RA) is intended to be an
industry-standard, vendor-independent Kubernetes reference architecture that is
not tied to any specific offering or distribution. No vendor-specific
enhancements are required in order to achieve conformance to the principles of
the Anuket specifications; conformance is achieved by using upstream components
or features that are developed by the open source community. This allows
operators to have a common Kubernetes-based architecture that supports any
conformant VNF or CNF deployed on it to operate as expected. The purpose of
this chapter is to outline all the components required to provide Kubernetes in
a consistent and reliable way. The specification of how to use these components
is detailed in Chapter 04,
:ref:`chapters/chapter04:component level architecture`.
Kubernetes is already a well-documented and widely deployed open source project
managed by the Cloud Native Computing Foundation (CNCF). Full documentation of
the Kubernetes code and project can be found at
`https://kubernetes.io/docs/home/ <https://kubernetes.io/docs/home/>`__. The
following chapters will only describe the specific features required by the
Anuket Reference Architecture, and how they would be expected to be
implemented. For any information related to standard Kubernetes features and
capabilities, refer back to the standard Kubernetes documentation.
While this reference architecture provides options for pluggable components,
such as a service mesh and other plugins that might be used, its focus is on
the abstracted interfaces and features that are required for telco-type
workload management and execution.
Chapter 5 of the Reference Model (RM) describes the hardware and software
profiles, which are descriptions of the capabilities and features that the
Cloud Infrastructure provides to the workloads. As of v2.0, Figure 5-3 in the
RM (also shown below) depicts a high-level view of the software profile
features that apply to each instance profile (Basic and High Performance). For
more information on the instance profiles, please refer to
:ref:`ref_model:chapters/chapter04:profiles`.

.. image:: ../../../ref_model/figures/RM-ch05-sw-profile.png
   :alt: "Figure 5-3 (from RM): NFVI software profiles"

**Figure 5-3 (from RM):** NFVI software profiles
In addition, RM Figure 5-4 (shown below) depicts the hardware profile features
that apply to each instance profile.

.. image:: ../../../ref_model/figures/RM_chap5_fig_5_4_HW_profile.png
   :alt: "Figure 5-4 (from RM): NFVI hardware profiles and host associated capabilities"

**Figure 5-4 (from RM):** NFVI hardware profiles and host associated capabilities
The features and capabilities described in the software and hardware profiles
are considered throughout this RA, with the traceability of the RA requirements
to the RM requirements formally documented in
:ref:`chapters/chapter02:architecture requirements` of this RA.
Infrastructure Services
-----------------------

Container Compute Services
~~~~~~~~~~~~~~~~~~~~~~~~~~

The primary interface between the physical/virtual infrastructure and any
container-relevant components is the Kubernetes Node Operating System. This is
the OS within which the container runtime exists, and within which the
containers run (and, therefore, the OS whose kernel is shared by the referenced
containers). This is shown in Figure 3-1 below.
.. image:: ../figures/ch03_hostOS.png
   :alt: "Figure 3-1: Kubernetes Node Operating System"

**Figure 3-1:** Kubernetes Node Operating System

The Kubernetes Node OS (as with any OS) consists of two main components:

- Kernel space
- User space
The kernel is the tightly controlled space that provides an API to applications
running in the user space (which usually have their own southbound interface in
an interpreter or libraries). Key containerisation capabilities, such as
control groups (cgroups) and namespaces, are kernel features; they are used and
managed by the container runtime in order to provide isolation between the user
space processes, which include the container itself as well as any processes
running within it. The security of the Kubernetes Node OS, and its relationship
to the containers and the applications running within them, is essential to the
overall security posture of the entire system; it must be appropriately secured
to ensure that processes running in one container cannot escalate their
privileges or otherwise affect processes running in an adjacent container. An
example and more details of this concept can be found in
:ref:`chapters/chapter06:api and feature testing requirements`.
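The cgroup interfaces the runtime manages are exposed by the kernel as plain
text files. As a small illustration, the sketch below interprets the cgroup v2
``cpu.max`` file, which a runtime writes to enforce a CPU bandwidth limit; the
helper name is ours, and the values shown are examples only.

```python
from typing import Optional


def cpu_limit_from_cpu_max(cpu_max: str) -> Optional[float]:
    """Parse a cgroup v2 cpu.max value ('$QUOTA $PERIOD', or 'max $PERIOD'
    when unlimited) into an effective number of CPUs."""
    quota, _, period = cpu_max.strip().partition(" ")
    if quota == "max":
        return None  # no CPU bandwidth limit applied to this cgroup
    return int(quota) / int(period)


# A container limited to two CPUs typically carries "200000 100000" in the
# cpu.max file of its cgroup (200 ms of CPU time per 100 ms period).
print(cpu_limit_from_cpu_max("200000 100000"))  # 2.0
print(cpu_limit_from_cpu_max("max 100000"))     # None
```

This is the mechanism by which a per-container CPU limit requested through
Kubernetes ultimately reaches the kernel.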
It is important to note that the container runtime itself is also a set of
processes that run in user space, and therefore also interact with the kernel
via system calls. Many diagrams show containers as running on top of, or
inside, the runtime. More accurately, the containers themselves are simply
processes running within an OS; the container runtime is simply another set of
processes that are used to manage these containers (pull, run, delete, etc.)
and the kernel features required to provide the isolation mechanisms (cgroups,
namespaces, filesystems, etc.) between the components.

Container Runtime Services
^^^^^^^^^^^^^^^^^^^^^^^^^^
The Container Runtime is the component that runs within the Kubernetes Node
Operating System (OS) and manages the underlying OS functionality, such as
cgroups and namespaces (in Linux), in order to provide a service within which
container images can be executed and can make use of the infrastructure
resources (compute, storage, networking, and other I/O devices) abstracted by
the Container Host OS, based on API instructions from the kubelet.
There are a number of different container runtimes. The simplest form,
low-level container runtimes, just manage the OS capabilities, such as cgroups
and namespaces, and then run commands from within those cgroups and namespaces.
An example of this type of runtime is runc, which underpins many of the
higher-level runtimes and is considered a reference implementation of the `Open
Container Initiative (OCI) runtime spec
<https://github.com/opencontainers/runtime-spec>`__. This specification
includes details on how an implementation (that is, an actual container runtime
such as runc) must, for example, configure resource shares and limits (e.g.
CPU, memory, IOPS) for the containers that Kubernetes (via the kubelet)
schedules on that host. This is important to ensure that the features and
capabilities described in :doc:`ref_model:chapters/chapter05` are supported by
this RA and delivered by any downstream Reference Implementations (RIs) to the
instance types defined in the RM.
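To make the resource-configuration role of the OCI spec concrete, the sketch
below builds (in Python, for self-containment) an illustrative fragment of the
``linux.resources`` section of an OCI ``config.json``, which a low-level
runtime translates into cgroup settings. The field names come from the OCI
runtime spec; the numeric values are examples only.

```python
import json

# Illustrative OCI config.json "linux.resources" fragment: the low-level
# runtime (e.g. runc) applies these values as cgroup settings for the
# container. The amounts below are examples, not recommendations.
resources = {
    "cpu": {
        "shares": 1024,    # relative CPU weight under contention
        "quota": 200000,   # microseconds of CPU time per period...
        "period": 100000,  # ...i.e. a two-CPU bandwidth limit
        "cpus": "2-3",     # cpuset pinning to specific cores
    },
    "memory": {
        "limit": 1073741824,  # hard memory limit in bytes (1 GiB)
    },
}

print(json.dumps({"linux": {"resources": resources}}, indent=2))
```

The kubelet derives such values from the Pod's resource requests and limits
and passes them to the runtime over the CRI.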
Where low-level runtimes are used for the execution of a container within an
OS, the more complex/complete high-level container runtimes are used for the
general management of container images: moving them to where they need to be
executed, unpacking them, and then passing them to the low-level runtime, which
executes the container. These high-level runtimes also include a comprehensive
API that other applications (e.g. Kubernetes) can use to interact with and
manage the containers. An example of this type of runtime is containerd, which
provides the features described above before passing off the unpacked container
image to runc for execution.
For Kubernetes, the important interface to consider for container management is
the `Kubernetes Container Runtime Interface (CRI)
<https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/>`__.
This is an interface specification that allows any container runtime to
integrate with the kubelet on a Kubernetes Node. The CRI decouples the kubelet
from the runtime that is running in the Host OS, meaning that the code required
to integrate the kubelet with a container runtime is not part of the kubelet
itself (that is, if a new container runtime is needed and it uses the CRI, it
will work with the kubelet). Examples of this type of runtime include
containerd (with its CRI plugin) and CRI-O, which is built specifically to work
with Kubernetes.
which provides the isolation of Operating System kernels.h](h
To fulfil }(h
To fulfil hj hhhNhNubh literal)}(h``req.inf.vir.01``h]hreq.inf.vir.01}(hhhj hhhNhNubah}(h!]h#]h%]h']h)]uh+j hj ubhn the architecture should support a container runtime
which provides the isolation of Operating System kernels.}(hn the architecture should support a container runtime
which provides the isolation of Operating System kernels.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhju hhubh?)}(hThe architecture must support a way to isolate the compute resources of the
The architecture must support a way to isolate the compute resources of the
infrastructure itself from the workloads' compute resources.
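One common way to realise this isolation in Kubernetes is the kubelet's
resource-reservation settings. The fragment below is a minimal, illustrative
``KubeletConfiguration`` built as a Python dict; the ``systemReserved`` and
``kubeReserved`` field names are from the kubelet configuration API, while the
reserved amounts are examples only.

```python
import json

# Illustrative kubelet configuration that reserves node resources for the
# infrastructure itself, so that workload Pods cannot consume them. The
# reserved amounts shown here are examples, not recommendations.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},  # OS daemons
    "kubeReserved": {"cpu": "500m", "memory": "1Gi"},    # kubelet, runtime
}

print(json.dumps(kubelet_config, indent=2))
```

Resources reserved this way are subtracted from the node's allocatable
capacity before the scheduler places workloads on it.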
The basic semantics of Kubernetes, and the information found in manifests,
define the built-in Kubernetes objects and their desired state.

Kubernetes built-in objects:

.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - Pod and workloads
     - Description
   * - `Pod <https://kubernetes.io/docs/concepts/workloads/pods/>`__
     - A Pod is a collection of containers that can run on a node. This
       resource is created by clients and scheduled onto nodes.
   * - `ReplicaSet <https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/>`__
     - A ReplicaSet ensures that a specified number of pod replicas are
       running at any given time.
   * - `Deployment <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`__
     - A Deployment enables declarative updates for Pods and ReplicaSets.
   * - `DaemonSet <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/>`__
     - A DaemonSet ensures that the correct nodes run a copy of a Pod.
   * - `Job <https://kubernetes.io/docs/concepts/workloads/controllers/job/>`__
     - A Job represents a task; it creates one or more Pods and will continue
       to retry until the expected number of successful completions is
       reached.
   * - `CronJob <https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/>`__
     - A CronJob manages time-based Jobs: once at a specified point in time,
       or repeatedly at a specified point in time.
   * - `StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`__
     - A StatefulSet represents a set of Pods with consistent identities.
       Identities are defined as: network, storage.
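As a concrete illustration of the "desired state" expressed in a manifest, the
sketch below builds a minimal ``apps/v1`` Deployment as a plain Python dict;
the image reference and labels are hypothetical examples.

```python
import json

# Minimal, illustrative Deployment manifest. The Deployment controller will
# create a ReplicaSet that keeps three copies of this Pod template running.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [
                    # Hypothetical image reference, for illustration only.
                    {"name": "app", "image": "registry.example.com/app:1.0"}
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Applying such a manifest declares the desired state; the controllers then
reconcile the cluster toward it.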
CPU Management
^^^^^^^^^^^^^^

CPU management has policies to determine placement preferences for workloads
that are sensitive to cache affinity or latency, and which therefore must not
be moved by the OS scheduler or throttled by the kubelet. Additionally, some
workloads are sensitive to the differences between physical cores and SMT,
while others (like DPDK-based workloads) are designed to run on isolated CPUs
(such as on Linux, with cpuset-based selection of CPUs and the isolcpus kernel
parameter specifying cores isolated from the general SMP balancing and
scheduler algorithms).
Topology Manager. Special care needs to be taken with:h](hKubernetes }(hKubernetes hj hhhNhNubh)}(h^`CPU Manager `__h]hCPU Manager}(hCPU Managerhj hhhNhNubah}(h!]h#]h%]h']h)]nameCPU ManagerhLhttps://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/uh+hhj ubh@ works with
Topology Manager. Special care needs to be taken with:}(h@ works with
Topology Manager. Special care needs to be taken with:hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhj hhubj )}(hhh](j )}(hXX Supporting isolated CPUs: Using kubelet `Reserved CPUs
`__
and Linux isolcpus allows configuration where only isolcpus are allocatable to pods. Scheduling pods to such nodes
can be influenced with taints, tolerations and node affinity.h]h?)}(hXX Supporting isolated CPUs: Using kubelet `Reserved CPUs
`__
and Linux isolcpus allows configuration where only isolcpus are allocatable to pods. Scheduling pods to such nodes
can be influenced with taints, tolerations and node affinity.h](h(Supporting isolated CPUs: Using kubelet }(h(Supporting isolated CPUs: Using kubelet hj hhhNhNubh)}(h`Reserved CPUs
`__h]h
Reserved CPUs}(h
Reserved CPUshj hhhNhNubah}(h!]h#]h%]h']h)]name
Reserved CPUshkhttps://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-listuh+hhj ubh
and Linux isolcpus allows configuration where only isolcpus are allocatable to pods. Scheduling pods to such nodes
can be influenced with taints, tolerations and node affinity.}(h
and Linux isolcpus allows configuration where only isolcpus are allocatable to pods. Scheduling pods to such nodes
can be influenced with taints, tolerations and node affinity.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhj ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubj )}(hDifferentiating between physical cores and SMT: When requesting an even number of CPU cores for pods, scheduling
can be influenced with taints, tolerations, and node affinity.
h]h?)}(hDifferentiating between physical cores and SMT: When requesting an even number of CPU cores for pods, scheduling
can be influenced with taints, tolerations, and node affinity.h]hDifferentiating between physical cores and SMT: When requesting an even number of CPU cores for pods, scheduling
can be influenced with taints, tolerations, and node affinity.}(hj: hj8 hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hKhj4 ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubeh}(h!]h#]h%]h']h)]j2 j3 uh+j hh,hKhj hhubeh}(h!]cpu-managementah#]h%]cpu managementah']h)]uh+h
hj hhhh,hKubh)}(hhh](h)}(h*Memory and Huge Pages Resources Managementh]h*Memory and Huge Pages Resources Management}(hj_ hj] hhhNhNubah}(h!]h#]h%]h']h)]uh+hhjZ hhhh,hKubh?)}(hThe Reference Model requires the support of huge pages in i.cap.018, which is supported by upstream Kubernetes
(`documentation `__).h](hoThe Reference Model requires the support of huge pages in i.cap.018, which is supported by upstream Kubernetes
(}(hoThe Reference Model requires the support of huge pages in i.cap.018, which is supported by upstream Kubernetes
(hjk hhhNhNubh)}(h[`documentation `__h]h
documentation}(h
documentationhjt hhhNhNubah}(h!]h#]h%]h']h)]namej| hGhttps://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/uh+hhjk ubh).}(h).hjk hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhjZ hhubh?)}(hX For proper mapping of huge pages to scheduled pods, both need to have huge pages enabled in the operating system
(configured in the kernel and mounted with correct permissions) and kubelet configuration. Multiple sizes of huge pages
can be enabled, such as 2 MiB and 1 GiB.h]hX For proper mapping of huge pages to scheduled pods, both need to have huge pages enabled in the operating system
(configured in the kernel and mounted with correct permissions) and kubelet configuration. Multiple sizes of huge pages
can be enabled, such as 2 MiB and 1 GiB.}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hKhjZ hhubh?)}(hX For some applications, huge pages should be allocated to account for consideration of the underlying HW topology.
`The Memory Manager `__
(added to Kubernetes v1.21 as alpha feature) enables the feature of guaranteed memory and huge pages allocation
for pods in the Guaranteed QoS class. The Memory Manager feeds the Topology Manager with hints for most suitable
NUMA affinity.h](hrFor some applications, huge pages should be allocated to account for consideration of the underlying HW topology.
}(hrFor some applications, huge pages should be allocated to account for consideration of the underlying HW topology.
hj hhhNhNubh)}(h\`The Memory Manager `__h]hThe Memory Manager}(hThe Memory Managerhj hhhNhNubah}(h!]h#]h%]h']h)]nameThe Memory ManagerhChttps://kubernetes.io/docs/tasks/administer-cluster/memory-manager/uh+hhj ubh
(added to Kubernetes v1.21 as an alpha feature) enables guaranteed memory and huge pages allocation
for pods in the Guaranteed QoS class. The Memory Manager feeds the Topology Manager with hints for most suitable
NUMA affinity.}(h
(added to Kubernetes v1.21 as an alpha feature) enables guaranteed memory and huge pages allocation
for pods in the Guaranteed QoS class. The Memory Manager feeds the Topology Manager with hints for most suitable
NUMA affinity.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhjZ hhubeh}(h!]*memory-and-huge-pages-resources-managementah#]h%]*memory and huge pages resources managementah']h)]uh+h
hj hhhh,hKubh)}(hhh](h)}(hHardware Topology Managementh]hHardware Topology Management}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj hhhh,hKubh?)}(hScheduling pods across NUMA boundaries can result in lower performance and higher latencies. This would be an issue
for applications that require optimisations of CPU isolation, memory and device locality.h]hScheduling pods across NUMA boundaries can result in lower performance and higher latencies. This would be an issue
for applications that require optimisations of CPU isolation, memory and device locality.}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hKhj hhubh?)}(hXP Kubernetes supports Topology policy per node as a beta feature
(`documentation `__) and not per pod.
The Topology Manager receives Topology information from Hint Providers which identify NUMA nodes (defined as server
system architecture divisions of CPU sockets) and preferred scheduling. In the case of the pod with Guaranteed QoS class
having integer CPU requests, the static CPU Manager policy would return topology hints relating to the exclusive CPU
and the Device Manager would provide hints for the requested device.h](h>Kubernetes supports Topology policy per node as a beta feature
(}(h>Kubernetes supports Topology policy per node as a beta feature
(hj hhhNhNubh)}(hY`documentation `__h]h
documentation}(h
documentationhj hhhNhNubah}(h!]h#]h%]h']h)]namej hEhttps://kubernetes.io/docs/tasks/administer-cluster/topology-manager/uh+hhj ubhX ) and not per pod.
The Topology Manager receives Topology information from Hint Providers which identify NUMA nodes (defined as server
system architecture divisions of CPU sockets) and preferred scheduling. In the case of the pod with Guaranteed QoS class
having integer CPU requests, the static CPU Manager policy would return topology hints relating to the exclusive CPU
and the Device Manager would provide hints for the requested device.}(hX ) and not per pod.
The Topology Manager receives Topology information from Hint Providers which identify NUMA nodes (defined as server
system architecture divisions of CPU sockets) and preferred scheduling. In the case of the pod with Guaranteed QoS class
having integer CPU requests, the static CPU Manager policy would return topology hints relating to the exclusive CPU
and the Device Manager would provide hints for the requested device.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhj hhubh?)}(hXL If memory or huge pages are not considered by the Topology Manager, this can be handled by the operating system
providing best-effort local page allocation for containers as long as there is sufficient free local memory on the node,
or with the Control Groups (cgroups) cpuset subsystem that can isolate memory to a single NUMA node.h]hXL If memory or huge pages are not considered by the Topology Manager, this can be handled by the operating system
providing best-effort local page allocation for containers as long as there is sufficient free local memory on the node,
or with the Control Groups (cgroups) cpuset subsystem that can isolate memory to a single NUMA node.}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hKhj hhubeh}(h!]hardware-topology-managementah#]h%]hardware topology managementah']h)]uh+h
hj hhhh,hKubh)}(hhh](h)}(hNode Feature Discoveryh]hNode Feature Discovery}(hj+ hj) hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj& hhhh,hKubh?)}(hX `Node Feature Discovery `__
(NFD) can run on every node as a daemon or as a job. NFD detects detailed hardware and software capabilities of each
node and then advertises those capabilities as node labels. Those node labels can be used in scheduling pods by using
Node Selector or Node Affinity for pods that require such capabilities.h](h)}(hs`Node Feature Discovery `__h]hNode Feature Discovery}(hNode Feature Discoveryhj; hhhNhNubah}(h!]h#]h%]h']h)]nameNode Feature DiscoveryhVhttps://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.htmluh+hhj7 ubhX3
(NFD) can run on every node as a daemon or as a job. NFD detects detailed hardware and software capabilities of each
node and then advertises those capabilities as node labels. Those node labels can be used in scheduling pods by using
Node Selector or Node Affinity for pods that require such capabilities.}(hX3
(NFD) can run on every node as a daemon or as a job. NFD detects detailed hardware and software capabilities of each
node and then advertises those capabilities as node labels. Those node labels can be used in scheduling pods by using
Node Selector or Node Affinity for pods that require such capabilities.hj7 hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhj& hhubeh}(h!]node-feature-discoveryah#]h%]node feature discoveryah']h)]uh+h
hj hhhh,hKubh)}(hhh](h)}(hDevice Plugin Frameworkh]hDevice Plugin Framework}(hje hjc hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj` hhhh,hKubh?)}(hXR `Device Plugin Framework `__
advertises device hardware resources to kubelet with which vendors can implement plugins for devices that may require
vendor-specific activation and life cycle management, and securely maps these devices to containers.h](h)}(hw`Device Plugin Framework `__h]hDevice Plugin Framework}(hDevice Plugin Frameworkhju hhhNhNubah}(h!]h#]h%]h']h)]nameDevice Plugin FrameworkhYhttps://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/uh+hhjq ubh
advertises device hardware resources to kubelet with which vendors can implement plugins for devices that may require
vendor-specific activation and life cycle management, and securely maps these devices to containers.}(h
advertises device hardware resources to kubelet with which vendors can implement plugins for devices that may require
vendor-specific activation and life cycle management, and securely maps these devices to containers.hjq hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hKhj` hhubh?)}(hOFigure 3-2 shows in four steps how device plugins operate on a Kubernetes node:h]hOFigure 3-2 shows in four steps how device plugins operate on a Kubernetes node:}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMhj` hhubj )}(hhh](j )}(hX0 1: During setup, the cluster administrator (more in :ref:`chapters/chapter03:operator pattern`)
knows or discovers (as per :ref:`chapters/chapter03:node feature discovery`) what kind of
devices are present on the different nodes, selects which devices to enable and deploys the associated device
plugins.h]h?)}(hX0 1: During setup, the cluster administrator (more in :ref:`chapters/chapter03:operator pattern`)
knows or discovers (as per :ref:`chapters/chapter03:node feature discovery`) what kind of
devices are present on the different nodes, selects which devices to enable and deploys the associated device
plugins.h](h41: During setup, the cluster administrator (more in }(h41: During setup, the cluster administrator (more in hj hhhNhNubhJ)}(h*:ref:`chapters/chapter03:operator pattern`h]hP)}(hj h]h#chapters/chapter03:operator pattern}(hhhj hhhNhNubah}(h!]h#](h[stdstd-refeh%]h']h)]uh+hOhj ubah}(h!]h#]h%]h']h)]refdochh refdomainj reftyperefrefexplicitrefwarnhn#chapters/chapter03:operator patternuh+hIhh,hMhj ubh)
knows or discovers (as per }(h)
knows or discovers (as per hj hhhNhNubhJ)}(h0:ref:`chapters/chapter03:node feature discovery`h]hP)}(hj h]h)chapters/chapter03:node feature discovery}(hhhj hhhNhNubah}(h!]h#](h[stdstd-refeh%]h']h)]uh+hOhj ubah}(h!]h#]h%]h']h)]refdochh refdomainj reftyperefrefexplicitrefwarnhn)chapters/chapter03:node feature discoveryuh+hIhh,hMhj ubh) what kind of
devices are present on the different nodes, selects which devices to enable and deploys the associated device
plugins.}(h) what kind of
devices are present on the different nodes, selects which devices to enable and deploys the associated device
plugins.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMhj ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubj )}(h2: The plugin reports the devices it found on the node to the Kubelet device manager and starts its gRPC server
to monitor the devices.h]h?)}(h2: The plugin reports the devices it found on the node to the Kubelet device manager and starts its gRPC server
to monitor the devices.h]h2: The plugin reports the devices it found on the node to the Kubelet device manager and starts its gRPC server
to monitor the devices.}(hj hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM hj ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubj )}(hc3: A user submits a pod specification (workload manifest file) requesting a certain type of device.h]h?)}(hj h]hc3: A user submits a pod specification (workload manifest file) requesting a certain type of device.}(hj hj" hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMhj ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubj )}(h4: The scheduler determines a suitable node based on device availability and the local kubelet assigns a specific
device to the pod's containers.
h]h?)}(h4: The scheduler determines a suitable node based on device availability and the local kubelet assigns a specific
device to the pod's containers.h]h4: The scheduler determines a suitable node based on device availability and the local kubelet assigns a specific
device to the pod’s containers.}(hj; hj9 hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMhj5 ubah}(h!]h#]h%]h']h)]uh+j hj hhhh,hNubeh}(h!]h#]h%]h']h)]j2 j3 uh+j hh,hMhj` hhubh)}(hm.. image:: ../figures/Ch3_Figure_Device_Plugin_operation.png
:alt: "Figure 3-2: Device Plugin Operation"
h]h}(h!]h#]h%]h']h)]alt%"Figure 3-2: Device Plugin Operation"uri.figures/Ch3_Figure_Device_Plugin_operation.pngh}hj` suh+hhj` hhhh,hNubh?)}(h'**Figure 3-2:** Device Plugin Operationh](h)}(h**Figure 3-2:**h]hFigure 3-2:}(hhhjf hhhNhNubah}(h!]h#]h%]h']h)]uh+hhjb ubh Device Plugin Operation}(h Device Plugin Operationhjb hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMhj` hhubh?)}(hX An example of an often-used device plugin is the
`SR-IOV Network Device Plugin `__, which discovers
and advertises SR-IOV Virtual Functions (VFs) available on a Kubernetes node, and is used to map VFs to scheduled pods.
To use it, the SR-IOV CNI is required, as well as a CNI multiplexer plugin (such as
`Multus CNI `__ or `DANM `__),
to provision additional secondary network interfaces for VFs (beyond the primary network interface). The SR-IOV CNI
during pod creation allocates a SR-IOV VF to a pod's network namespace using the VF information given by the meta
plugin, and on pod deletion releases the VF from the pod.h](h.An example of an often-used device plugin is the
}(h.An example of an often-used device plugin is the
hj hhhNhNubh)}(hf`SR-IOV Network Device Plugin `__h]hSR-IOV Network Device Plugin}(hSR-IOV Network Device Pluginhj hhhNhNubah}(h!]h#]h%]h']h)]nameSR-IOV Network Device PluginhChttps://github.com/k8snetworkplumbingwg/sriov-network-device-pluginuh+hhj ubh, which discovers
and advertises SR-IOV Virtual Functions (VFs) available on a Kubernetes node, and is used to map VFs to scheduled pods.
To use it, the SR-IOV CNI is required, as well as a CNI multiplexer plugin (such as
}(h, which discovers
and advertises SR-IOV Virtual Functions (VFs) available on a Kubernetes node, and is used to map VFs to scheduled pods.
To use it, the SR-IOV CNI is required, as well as a CNI multiplexer plugin (such as
hj hhhNhNubh)}(hC`Multus CNI `__h]h
Multus CNI}(h
Multus CNIhj hhhNhNubah}(h!]h#]h%]h']h)]name
Multus CNIh2https://github.com/k8snetworkplumbingwg/multus-cniuh+hhj ubh or }(h or hj hhhNhNubh)}(h(`DANM `__h]hDANM}(hDANMhj hhhNhNubah}(h!]h#]h%]h']h)]namej hhttps://github.com/nokia/danmuh+hhj ubhX$ ),
to provision additional secondary network interfaces for VFs (beyond the primary network interface). The SR-IOV CNI
during pod creation allocates a SR-IOV VF to a pod’s network namespace using the VF information given by the meta
plugin, and on pod deletion releases the VF from the pod.}(hX" ),
to provision additional secondary network interfaces for VFs (beyond the primary network interface). The SR-IOV CNI
during pod creation allocates a SR-IOV VF to a pod's network namespace using the VF information given by the meta
plugin, and on pod deletion releases the VF from the pod.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMhj` hhubeh}(h!]device-plugin-frameworkah#]h%]device plugin frameworkah']h)]uh+h
hj hhhh,hKubh)}(hhh](h)}(hHardware Accelerationh]hHardware Acceleration}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj hhhh,hMubh?)}(hXX Hardware Acceleration Abstraction in RM
:ref:`ref_model:chapters/chapter03:hardware acceleration abstraction` describes types of hardware
acceleration (CPU instructions, Fixed function accelerators, Firmware-programmable adapters, SmartNICs and
SmartSwitches), and usage for Infrastructure Level Acceleration and Application Level Acceleration.h](h(Hardware Acceleration Abstraction in RM
}(h(Hardware Acceleration Abstraction in RM
hj hhhNhNubhJ)}(hE:ref:`ref_model:chapters/chapter03:hardware acceleration abstraction`h]hP)}(hj h]h>ref_model:chapters/chapter03:hardware acceleration abstraction}(hhhj hhhNhNubah}(h!]h#](h[stdstd-refeh%]h']h)]uh+hOhj ubah}(h!]h#]h%]h']h)]refdochh refdomainj reftyperefrefexplicitrefwarnhn>ref_model:chapters/chapter03:hardware acceleration abstractionuh+hIhh,hM!hj ubh describes types of hardware
acceleration (CPU instructions, Fixed function accelerators, Firmware-programmable adapters, SmartNICs and
SmartSwitches), and usage for Infrastructure Level Acceleration and Application Level Acceleration.}(h describes types of hardware
acceleration (CPU instructions, Fixed function accelerators, Firmware-programmable adapters, SmartNICs and
SmartSwitches), and usage for Infrastructure Level Acceleration and Application Level Acceleration.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hM!hj hhubh?)}(hzScheduling pods that require or prefer to run on nodes with hardware accelerators will depend on the type of accelerator
used:h]hzScheduling pods that require or prefer to run on nodes with hardware accelerators will depend on the type of accelerator
used:}(hj! hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM&hj hhubj )}(hhh](j )}(h9CPU instructions can be found with Node Feature Discoveryh]h?)}(hj2 h]h9CPU instructions can be found with Node Feature Discovery}(hj2 hj4 hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM)hj0 ubah}(h!]h#]h%]h']h)]uh+j hj- hhhh,hNubj )}(hFixed function accelerators, Firmware-programmable network adapters and SmartNICs can be found and mapped to pods
by using Device Plugin.
h]h?)}(hFixed function accelerators, Firmware-programmable network adapters and SmartNICs can be found and mapped to pods
by using Device Plugin.h]hFixed function accelerators, Firmware-programmable network adapters and SmartNICs can be found and mapped to pods
by using Device Plugin.}(hjM hjK hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM*hjG ubah}(h!]h#]h%]h']h)]uh+j hj- hhhh,hNubeh}(h!]h#]h%]h']h)]j2 j3 uh+j hh,hM)hj hhubeh}(h!]hardware-accelerationah#]h%]hardware accelerationah']h)]uh+h
hj hhhh,hMubh)}(hhh](h)}(h/Scheduling Pods with Non-resilient Applicationsh]h/Scheduling Pods with Non-resilient Applications}(hjr hjp hhhNhNubah}(h!]h#]h%]h']h)]uh+hhjm hhhh,hM.ubh?)}(hXH Non-resilient applications are sensitive to platform impairments on Compute like pausing CPU cycles (for example
because of the OS scheduler) or Networking like packet drops, reordering or latencies. Such applications need to be
carefully scheduled on nodes and preferably still decoupled from infrastructure details of those nodes.h]hXH Non-resilient applications are sensitive to platform impairments on Compute like pausing CPU cycles (for example
because of the OS scheduler) or Networking like packet drops, reordering or latencies. Such applications need to be
carefully scheduled on nodes and preferably still decoupled from infrastructure details of those nodes.}(hj hj~ hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM0hjm hhubjb )}(hhh]jg )}(hhh](jl )}(hhh]h}(h!]h#]h%]h']h)]colwidthKuh+jk hj ubjl )}(hhh]h}(h!]h#]h%]h']h)]colwidthKuh+jk hj ubjl )}(hhh]h}(h!]h#]h%]h']h)]colwidthKuh+jk hj ubjl )}(hhh]h}(h!]h#]h%]h']h)]colwidthKuh+jk hj ubjl )}(hhh]h}(h!]h#]h%]h']h)]colwidthK/uh+jk hj ubj )}(hhh]j )}(hhh](j )}(hhh]h?)}(h#h]h#}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM5hj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hIntensive onh]hIntensive on}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM5hj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hNot intensive onh]hNot intensive on}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM5hj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hUsing hardware accelerationh]hUsing hardware acceleration}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM5hj
ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(h)Requirements for optimised pod schedulingh]h)Requirements for optimised pod scheduling}(hj+
hj)
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM5hj&
ubah}(h!]h#]h%]h']h)]uh+j hj ubeh}(h!]h#]h%]h']h)]uh+j hj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh](j )}(hhh](j )}(hhh]h?)}(h1h]h1}(hjT
hjR
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM7hjO
ubah}(h!]h#]h%]h']h)]uh+j hjL
ubj )}(hhh]h?)}(hComputeh]hCompute}(hjk
hji
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM7hjf
ubah}(h!]h#]h%]h']h)]uh+j hjL
ubj )}(hhh]h?)}(hNetworking
(dataplane)h]hNetworking
(dataplane)}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM7hj}
ubah}(h!]h#]h%]h']h)]uh+j hjL
ubj )}(hhh]h?)}(hNoh]hNo}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM7hj
ubah}(h!]h#]h%]h']h)]uh+j hjL
ubj )}(hhh]h?)}(hCPU Managerh]hCPU Manager}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM7hj
ubah}(h!]h#]h%]h']h)]uh+j hjL
ubeh}(h!]h#]h%]h']h)]uh+j hjI
ubj )}(hhh](j )}(hhh]h?)}(h2h]h2}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM:hj
ubah}(h!]h#]h%]h']h)]uh+j hj
ubj )}(hhh]h?)}(hComputeh]hCompute}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM:hj
ubah}(h!]h#]h%]h']h)]uh+j hj
ubj )}(hhh]h?)}(hNetworking
(dataplane)h]hNetworking
(dataplane) }(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM:hj
ubah}(h!]h#]h%]h']h)]uh+j hj
ubj )}(hhh]h?)}(hCPU instructionsh]hCPU instructions}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM:hj ubah}(h!]h#]h%]h']h)]uh+j hj
ubj )}(hhh]h?)}(hCPU Manager, NFDh]hCPU Manager, NFD}(hj, hj* hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM:hj' ubah}(h!]h#]h%]h']h)]uh+j hj
ubeh}(h!]h#]h%]h']h)]uh+j hjI
ubj )}(hhh](j )}(hhh]h?)}(h3h]h3}(hjL hjJ hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM=hjG ubah}(h!]h#]h%]h']h)]uh+j hjD ubj )}(hhh]h?)}(hComputeh]hCompute}(hjc hja hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM=hj^ ubah}(h!]h#]h%]h']h)]uh+j hjD ubj )}(hhh]h?)}(hNetworking
(dataplane)h]hNetworking
(dataplane)}(hjz hjx hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM=hju ubah}(h!]h#]h%]h']h)]uh+j hjD ubj )}(hhh]h?)}(hPFixed function acceleration,
Firmware-programmable network
adapters or SmartNICsh]hPFixed function acceleration,
Firmware-programmable network
adapters or SmartNICs}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM=hj ubah}(h!]h#]h%]h']h)]uh+j hjD ubj )}(hhh]h?)}(hCPU Manager, Device Pluginh]hCPU Manager, Device Plugin}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hM=hj ubah}(h!]h#]h%]h']h)]uh+j hjD ubeh}(h!]h#]h%]h']h)]uh+j hjI
ubj )}(hhh](j )}(hhh]h?)}(h4h]h4}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMAhj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hNetworking
(dataplane)h]hNetworking
(dataplane)}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMAhj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hXNo, or Fixed function
acceleration, Firmware-
programmable network adapters
or SmartNICsh]hXNo, or Fixed function
acceleration, Firmware-
programmable network adapters
or SmartNICs}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMAhj ubah}(h!]h#]h%]h']h)]uh+j hj ubj )}(hhh]h?)}(hHuge pages (for DPDK-based applications); CPU
Manager with configuration for isolcpus and
SMT; Multiple interfaces; NUMA topology;
Device Pluginh]hHuge pages (for DPDK-based applications); CPU
Manager with configuration for isolcpus and
SMT; Multiple interfaces; NUMA topology;
Device Plugin}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMAhj ubah}(h!]h#]h%]h']h)]uh+j hj ubeh}(h!]h#]h%]h']h)]uh+j hjI
ubj )}(hhh](j )}(hhh]h?)}(h5h]h5}(hj6 hj4 hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMFhj1 ubah}(h!]h#]h%]h']h)]uh+j hj. ubj )}(hhh]h?)}(hNetworking
(dataplane)h]hNetworking
(dataplane)}(hjM hjK hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMFhjH ubah}(h!]h#]h%]h']h)]uh+j hj. ubj )}(hhh]h}(h!]h#]h%]h']h)]uh+j hj. ubj )}(hhh]h?)}(hCPU instructionsh]hCPU instructions}(hjm hjk hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMFhjh ubah}(h!]h#]h%]h']h)]uh+j hj. ubj )}(hhh]h?)}(hHuge pages (for DPDK-based applications); CPU
Manager with configuration for isolcpus and
SMT; Multiple interfaces; NUMA topology;
Device Plugin; NFDh]hHuge pages (for DPDK-based applications); CPU
Manager with configuration for isolcpus and
SMT; Multiple interfaces; NUMA topology;
Device Plugin; NFD}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMFhj ubah}(h!]h#]h%]h']h)]uh+j hj. ubeh}(h!]h#]h%]h']h)]uh+j hjI
ubeh}(h!]h#]h%]h']h)]uh+j hj ubeh}(h!]h#]h%]h']h)]colsKuh+jf hj ubah}(h!]h#]h%]h']h)]uh+ja hjm hhhh,hNubh?)}(hc**Table 3-1:** Categories of applications, requirements for scheduling pods and Kubernetes featuresh](h)}(h**Table 3-1:**h]h
Table 3-1:}(hhhj hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj ubhU Categories of applications, requirements for scheduling pods and Kubernetes features}(hU Categories of applications, requirements for scheduling pods and Kubernetes featureshj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMLhjm hhubeh}(h!]/scheduling-pods-with-non-resilient-applicationsah#]h%]/scheduling pods with non-resilient applicationsah']h)]uh+h
hj hhhh,hM.ubh)}(hhh](h)}(hVirtual Machine based Clustersh]hVirtual Machine based Clusters}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj hhhh,hMOubh?)}(hKubernetes clusters using the above enhancements can implement worker nodes with "bare metal" servers (running Container
Runtime in a Linux host operating system) or with virtual machines (VMs, on hypervisor).h]hKubernetes clusters using the above enhancements can implement worker nodes with “bare metal” servers (running Container
Runtime in a Linux host operating system) or with virtual machines (VMs, on hypervisor).}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMQhj hhubh?)}(hnWhen running in VMs, the following list of configurations shows what is needed for non-resilient applications:h]hnWhen running in VMs, the following list of configurations shows what is needed for non-resilient applications:}(hj hj hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMThj hhubj )}(hhh](j )}(h;CPU Manager managing vCPUs that the hypervisor provides to VMs.h]h?)}(hj
h]h;CPU Manager managing vCPUs that the hypervisor provides to VMs.}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMVhj
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubj )}(hWHuge pages enabled in hypervisor, mapped to VM, enabled in guest OS, and mapped to pod.h]h?)}(hj
h]hWHuge pages enabled in hypervisor, mapped to VM, enabled in guest OS, and mapped to pod.}(hj
hj
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMWhj
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubj )}(hHardware Topology Management with NUMA enabled in hypervisor, mapped into VM, if needed enabled in guest OS, and
mapped into pod.h]h?)}(hHardware Topology Management with NUMA enabled in hypervisor, mapped into VM, if needed enabled in guest OS, and
mapped into pod.h]hHardware Topology Management with NUMA enabled in hypervisor, mapped into VM, if needed enabled in guest OS, and
mapped into pod.}(hj8
hj6
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMXhj2
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubj )}(hX If Node Feature Discovery and Device Plugin Framework are required, the required CPU instructions must be enabled
in the VM virtual hardware, and the required devices must be virtualised in the hypervisor or passed through to
the Node VM, and mapped into the pods.
h]h?)}(hX If Node Feature Discovery and Device Plugin Framework are required, the required CPU instructions must be enabled
in the VM virtual hardware, and the required devices must be virtualised in the hypervisor or passed through to
the Node VM, and mapped into the pods.h]hX If Node Feature Discovery and Device Plugin Framework are required, the required CPU instructions must be enabled
in the VM virtual hardware, and the required devices must be virtualised in the hypervisor or passed through to
the Node VM, and mapped into the pods.}(hjP
hjN
hhhNhNubah}(h!]h#]h%]h']h)]uh+h>hh,hMZhjJ
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubeh}(h!]h#]h%]h']h)]j2 j3 uh+j hh,hMVhj hhubeh}(h!]virtual-machine-based-clustersah#]h%]virtual machine based clustersah']h)]uh+h
hj hhhh,hMOubeh}(h!]container-compute-servicesah#]h%]container compute servicesah']h)]uh+h
hj hhhh,hK@ubh)}(hhh](h)}(hContainer Networking Servicesh]hContainer Networking Services}(hj}
hj{
hhhNhNubah}(h!]h#]h%]h']h)]uh+hhjx
hhhh,hM_ubh?)}(hX' Kubernetes considers networking as a key component, with a number of distinct
solutions. By default, Kubernetes networking is considered an "extension" to the
core functionality, and is managed through the use of `Network
Plugins `__,
which can be categorised based on the topology of the networks they manage, and
the integration with the switching (e.g. vlan vs tunnels) and routing (e.g.
virtual vs physical gateways) infrastructure outside of the Cluster:h](hKubernetes considers networking as a key component, with a number of distinct
solutions. By default, Kubernetes networking is considered an “extension” to the
core functionality, and is managed through the use of }(hKubernetes considers networking as a key component, with a number of distinct
solutions. By default, Kubernetes networking is considered an "extension" to the
core functionality, and is managed through the use of hj
hhhNhNubh)}(hp`Network
Plugins `__h]hNetwork
Plugins}(hNetwork
Pluginshj
hhhNhNubah}(h!]h#]h%]h']h)]nameNetwork PluginshZhttps://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/uh+hhj
ubh,
which can be categorised based on the topology of the networks they manage, and
the integration with the switching (e.g. vlan vs tunnels) and routing (e.g.
virtual vs physical gateways) infrastructure outside of the Cluster:}(h,
which can be categorised based on the topology of the networks they manage, and
the integration with the switching (e.g. vlan vs tunnels) and routing (e.g.
virtual vs physical gateways) infrastructure outside of the Cluster:hj
hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMahjx
hhubj )}(hhh](j )}(hX7 **Layer 2 underlay** plugins provide east/west ethernet connectivity between
pods and north/south connectivity between pods and external networks by using
the network underlay (e.g. VLANs on DC switches). When using the underlay for
layer 2 segments, configuration is required on the DC network for every network.h]h?)}(hX7 **Layer 2 underlay** plugins provide east/west ethernet connectivity between
pods and north/south connectivity between pods and external networks by using
the network underlay (e.g. VLANs on DC switches). When using the underlay for
layer 2 segments, configuration is required on the DC network for every network.h](h)}(h**Layer 2 underlay**h]hLayer 2 underlay}(hhhj
hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj
ubhX# plugins provide east/west ethernet connectivity between
pods and north/south connectivity between pods and external networks by using
the network underlay (eg VLANs on DC switches). When using the underlay for
layer 2 segments, configuration is required on the DC network for every network.}(hX# plugins provide east/west ethernet connectivity between
pods and north/south connectivity between pods and external networks by using
the network underlay (eg VLANs on DC switches). When using the underlay for
layer 2 segments, configuration is required on the DC network for every network.hj
hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMihj
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubj )}(hX **Layer 2 overlay** plugins provide east/west pod-to-pod connectivity by creating
overlay tunnels (eg VXLAN/GENEVE tunnels) between the nodes, without requiring
creation of per-application layer 2 segments on the underlay. North-south
connectivity cannot be provided.h]h?)}(hX **Layer 2 overlay** plugins provide east/west pod-to-pod connectivity by creating
overlay tunnels (eg VXLAN/GENEVE tunnels) between the nodes, without requiring
creation of per-application layer 2 segments on the underlay. North-south
connectivity cannot be provided.h](h)}(h**Layer 2 overlay**h]hLayer 2 overlay}(hhhj
hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj
ubh plugins provide east/west pod-to-pod connectivity by creating
overlay tunnels (eg VXLAN/GENEVE tunnels) between the nodes, without requiring
creation of per-application layer 2 segments on the underlay. North-south
connectivity cannot be provided.}(h plugins provide east/west pod-to-pod connectivity by creating
overlay tunnels (eg VXLAN/GENEVE tunnels) between the nodes, without requiring
creation of per-application layer 2 segments on the underlay. North-south
connectivity cannot be provided.hj
hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMmhj
ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubj )}(hXL **Layer 3** plugins create a virtual router (eg BPF, iptables, kubeproxy) in
each node, and can route traffic between multiple layer 2 overlays via them.
North-south traffic is managed by peering (eg with BGP) virtual routers on the
nodes with the DC network underlay, allowing each pod or service IP to be
announced independently.
h]h?)}(hXK **Layer 3** plugins create a virtual router (eg BPF, iptables, kubeproxy) in
each node, and can route traffic between multiple layer 2 overlays via them.
North-south traffic is managed by peering (eg with BGP) virtual routers on the
nodes with the DC network underlay, allowing each pod or service IP to be
announced independently.h](h)}(h**Layer 3**h]hLayer 3}(hhhj hhhNhNubah}(h!]h#]h%]h']h)]uh+hhj ubhX@ plugins create a virtual router (eg BPF, iptables, kubeproxy) in
each node, and can route traffic between multiple layer 2 overlays via them.
North-south traffic is managed by peering (eg with BGP) virtual routers on the
nodes with the DC network underlay, allowing each pod or service IP to be
announced independently.}(hX@ plugins create a virtual router (eg BPF, iptables, kubeproxy) in
each node, and can route traffic between multiple layer 2 overlays via them.
North-south traffic is managed by peering (eg with BGP) virtual routers on the
nodes with the DC network underlay, allowing each pod or service IP to be
announced independently.hj hhhNhNubeh}(h!]h#]h%]h']h)]uh+h>hh,hMqhj ubah}(h!]h#]h%]h']h)]uh+j hj
hhhh,hNubeh}(h!]h#]h%]h']h)]j2 j3 uh+j hh,hMihjx
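All three plugin categories attach to the kubelet through the same CNI
configuration mechanism. The following is an illustrative sketch only (the
network name, bridge device, and subnet are placeholders, and the reference
``bridge`` and ``portmap`` plugins stand in for whichever plugins a given
solution actually deploys) of a CNI configuration list that a Network Plugin
typically installs under ``/etc/cni/net.d/`` on each node:

.. code-block:: json

   {
     "cniVersion": "0.4.0",
     "name": "example-cluster-net",
     "plugins": [
       {
         "type": "bridge",
         "bridge": "cni0",
         "ipam": {
           "type": "host-local",
           "subnet": "10.244.1.0/24"
         }
       },
       {
         "type": "portmap",
         "capabilities": { "portMappings": true }
       }
     ]
   }

The kubelet invokes the plugins in this chain, in order, for every Pod sandbox
it creates or deletes.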
However, for more complex requirements, such as providing connectivity through
acceleration hardware, there are three approaches that can be taken, with
Table 3-1 showing some of the differences between networking solutions built
from these options. It is important to note that different networking
solutions require different descriptors from the Kubernetes workloads
(specifically, the deployment artefacts, such as YAML files), and therefore
the networking solution should be agreed between the CNF vendors and the CNF
operators:
- The **Default CNI Plugin**, through the use of deployment-specific
  configuration (e.g., `Tungsten Fabric
  <https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html#vrouter-deployment-options>`__)
- A **multiplexer/meta-plugin** that integrates with the Kubernetes control
  plane via CNI (Container Network Interface) and allows the use of multiple
  CNI plugins, in order to provide the specific connectivity that the default
  Network Plugin may not be able to provide (e.g.,
  `Multus <https://github.com/intel/multus-cni>`__,
  `DANM <https://github.com/nokia/danm>`__)
- An external, **federated networking manager** that uses the Kubernetes API
  Server to create and manage additional connections for Pods (e.g.,
  `Network Service Mesh <https://networkservicemesh.io>`__)
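As a hedged sketch of the multiplexer/meta-plugin approach, the following
shows how a secondary Pod interface could be requested with Multus. The
attachment name, ``macvlan`` master device, subnet, and container image are
illustrative placeholders, not part of this specification:

.. code-block:: yaml

   # Secondary network definition handled by Multus (illustrative values).
   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: macvlan-dataplane
   spec:
     config: '{
       "cniVersion": "0.4.0",
       "type": "macvlan",
       "master": "eth1",
       "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
     }'
   ---
   # Pod requesting the secondary interface via annotation.
   apiVersion: v1
   kind: Pod
   metadata:
     name: cnf-example
     annotations:
       k8s.v1.cni.cncf.io/networks: macvlan-dataplane
   spec:
     containers:
     - name: app
       image: example.registry.local/cnf-app:1.0

The default Network Plugin still provides the Pod's ``eth0``; Multus attaches
an additional interface from the attachment definition.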
.. list-table:: Table 3-1: Comparison of example networking solutions
   :widths: 24 19 19 19 19
   :header-rows: 1

   * - Requirement
     - Networking Solution with Multus
     - Networking Solution with DANM
     - Networking Solution with Tungsten Fabric
     - Networking Solution with NSM
   * - Additional network connections provider
     - Multiplexer/meta-plugin
     - Multiplexer/meta-plugin
     - Default CNI Plugin
     - Federated networking manager
   * - The overlay network encapsulation protocol needs to enable ECMP in the
       underlay (``infra.net.cfg.002``)
     - Supported via the additional CNI plugin
     - Supported via the additional CNI plugin
     - Supported
     - TBC
   * - NAT (``infra.net.cfg.003``)
     - Supported via the additional CNI plugin
     - Supported
     - Supported
     - TBC
   * - Network Policies (Security Groups) (``infra.net.cfg.004``)
     - Supported via a CNI Network Plugin that supports Network Policies
     - Supported via a CNI Network Plugin that supports Network Policies
     - Supported via a CNI Network Plugin that supports Network Policies
     - Supported via a CNI Network Plugin that supports Network Policies
   * - Traffic patterns symmetry (``infra.net.cfg.006``)
     - Depends on CNI plugin used
     - Depends on CNI plugin used
     - Depends on CNI plugin used
     - Depends on CNI plugin used
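Network Policies (``infra.net.cfg.004``) are declared through the standard
Kubernetes API and enforced by whichever CNI Network Plugin supports them. A
minimal sketch, with placeholder labels and port:

.. code-block:: yaml

   apiVersion: networking.k8s.io/v1
   kind: NetworkPolicy
   metadata:
     name: allow-app-to-db   # placeholder name
   spec:
     podSelector:
       matchLabels:
         role: db            # applies to Pods labelled role=db
     policyTypes:
     - Ingress
     ingress:
     - from:
       - podSelector:
           matchLabels:
             role: app       # only Pods labelled role=app may connect
       ports:
       - protocol: TCP
         port: 5432

If the chosen CNI plugin does not implement Network Policies, such a manifest
is accepted by the API Server but has no enforcement effect.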