Oracle Kubernetes Engine (OKE)
Oracle Kubernetes Engine is a fully managed Kubernetes service offered by Oracle Cloud. OKE enables organizations to deploy, scale, and manage containerized applications on the Oracle Cloud Infrastructure.
Kubernetes is an open-source container orchestration platform that enables developers to deploy, scale, and manage containerized applications across a cluster of machines. OKE simplifies the use of Kubernetes on Oracle Cloud Infrastructure by offering a fully managed service that handles the underlying infrastructure and maintenance tasks. With OKE, developers deploy containerized applications with Kubernetes, and the platform automatically scales and manages those applications based on demand. Additionally, OKE provides features like load balancing, monitoring, and logging to assist developers in managing and optimizing their applications.
Containers are a method of packaging and distributing software applications, along with all their dependencies and libraries, in a portable and self-contained manner. They allow developers to quickly create and deploy applications without worrying about the underlying infrastructure, specific operating systems, and dependencies needed to run the application. Using containerization technology, containers create a lightweight, standalone, executable package that includes everything an application needs to run, such as the application code, system tools, libraries, and runtime. This makes it easy to deploy applications in any environment, whether it’s on a local machine, in a cloud environment, or on-premises. Containers are often used alongside container orchestration platforms like Kubernetes, which enable developers to manage and deploy large numbers of containers at scale. The most widely used containerization platform today is Docker.
Docker
Docker is a containerization platform that enables developers to package and distribute software applications in a portable and self-contained way. Docker uses containers to create lightweight, standalone, executable packages that include everything an application needs to run, including the application code, system tools, libraries, and runtime. Docker allows developers to build and deploy applications quickly and easily, without having to worry about the underlying infrastructure or the specific operating system and dependencies required to run the application. This makes it easy to deploy applications in any environment, whether it’s on a local machine, in a cloud environment, or on-premises.
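To make this concrete, the following is a minimal sketch of how an application might be packaged with Docker. The base image, file names, and port are illustrative assumptions, not specific to OKE:

```dockerfile
# Illustrative Dockerfile sketch; app.py, requirements.txt, and port 8080 are assumptions
FROM python:3.12-slim                                 # base image with OS libraries and runtime
WORKDIR /app                                          # working directory inside the container
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt    # install the application's dependencies
COPY app.py .                                         # copy the application code into the image
EXPOSE 8080                                           # document the port the app listens on
CMD ["python", "app.py"]                              # command run when the container starts
```

You would then build and run the image locally with commands such as `docker build -t myapp:1.0 .` followed by `docker run -p 8080:8080 myapp:1.0`, and push it to a registry so Kubernetes can pull it.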
Key Terms
There are several key terms and technologies that you need to understand first:
Serverless Kubernetes with Virtual Nodes: Virtual nodes offer a serverless Kubernetes experience for running containerized applications at scale, without the need to spend extra resources on managing, scaling, upgrading, and troubleshooting cluster infrastructure.
With virtual nodes, Kubernetes sees these nodes as regular ones, allowing for precise pod scaling with per-pod pricing. This means you can scale your deployments without having to consider the cluster’s capacity, making it easier to handle scalable workloads like high-traffic web applications and data processing jobs.
Managed Nodes: Managed nodes are worker nodes that are created within a customer’s tenancy and operated with shared responsibility between OKE and the customer. Customers can define the desired specifications for their worker node pools, and OKE streamlines the provisioning of these nodes. OKE offers features to automate and simplify key ongoing operations for these worker nodes, including on-demand cycling to automate updating worker nodes, self-healing of worker nodes upon detection of failure, autoscaling, and more. Managed nodes are suitable for customers who require worker nodes with configurations or compute shapes that are not supported by virtual nodes.
Self-Managed Nodes: Self-managed nodes in OKE provide additional customization and control for running containerized workloads that need unique compute configurations or advanced setup across the stack not supported by managed nodes. This allows customers to utilize specialized infrastructure options such as RDMA-enabled bare metal HPC/GPU, confidential computing, or other specialized use cases. While customers still benefit from a managed control plane, they are responsible for managing the worker nodes, including Kubernetes upgrades and OS patching.
Control Plane Nodes: These nodes were previously referred to as master nodes. This is where the control plane components run: the API server, the scheduler, the controller manager, and the etcd datastore. For redundancy, three control plane nodes are often used.
Worker Nodes: This is where the containers run. Worker nodes communicate with the control plane nodes, which manage them.
K8s: This is a common abbreviation for Kubernetes. The 8 represents the number of characters between the K and the s!
Managed Cluster: A managed cluster uses managed nodes for its workers. While it provides greater control, it comes at a higher cost in operational responsibility. Managed nodes are OCI Compute instances that run in your tenancy and can be controlled and configured under a shared operational-responsibility model.
Virtual Cluster: A virtual cluster uses virtual nodes instead of customer-managed compute instances, which helps reduce cost and operational overhead. Virtual nodes offer precise, pod-level elasticity and pay-per-use pricing. This allows you to scale deployments without worrying about the cluster’s capacity, making it easier to handle scalable workloads like high-traffic web applications and data processing jobs. Resources are allocated at the pod level.
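Because resources on virtual nodes are allocated (and billed) per pod, the CPU and memory requests in the pod specification determine what you consume. A minimal sketch, where the pod name, image, and values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend          # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.27         # illustrative container image
    resources:
      requests:               # on virtual nodes, allocation follows these values
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
```

Sizing requests accurately matters more on virtual nodes than on fixed-size worker nodes, since there is no idle node capacity to absorb overruns.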
Authentication and Authorization: You can control access and permissions using native OCI identity and access management (IAM), Oracle Identity Cloud Service, and Kubernetes role-based access control. You can also configure OCI IAM multifactor authentication. Workload Identity allows you to establish secure authentication at the pod level for OCI APIs and services. By following a zero-trust approach, you can ensure that users have access only to necessary resources. This helps enhance your security by reducing the potential for security breaches or unauthorized access.
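Kubernetes role-based access control is expressed with Role and RoleBinding objects. The sketch below grants read-only access to pods in a single namespace; the namespace and group names are hypothetical, and in practice the group would be mapped from your OCI IAM configuration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev                # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]               # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: Group
  name: oke-developers          # hypothetical group of cluster users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting only the verbs a group actually needs, as here, is the zero-trust approach described above.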
Compliance: Compliance starts with clusters that already have industry-standard regulatory frameworks approved, such as FedRAMP High, HIPAA, PCI, and SOC 2.
Container Image Scanning: OKE supports container image scanning. This capability allows you to ensure that your application images are free of serious security vulnerabilities and, by enforcing image signing, that the integrity of the container images is preserved when they are deployed. You can easily scan for known common vulnerabilities and exposures (CVEs).
Encryption: Oracle encrypts block volumes, boot volumes, and volume backups at rest using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption.
Strong Isolation at the Pod Level: Virtual nodes provide strong isolation for each Kubernetes pod. Pods do not share any underlying kernel, memory, or CPU resources. This pod-level isolation enables you or your organization to run untrusted workloads, multitenant applications, and sensitive data.
A Kubernetes cluster is a collection of nodes, which are machines running applications. Nodes can be either physical machines or virtual machines, and their capacity in terms of the number of CPUs and amount of memory is defined at the time of their creation. Typically, a cluster consists of three control plane nodes and enough worker nodes to handle the workload. There are two types of clusters:
Enhanced Clusters: Enhanced clusters support all available features, including features not supported by basic clusters (such as virtual nodes, cluster add-on management, workload identity, and additional worker nodes per cluster). Enhanced clusters come with a service-level agreement (SLA).
Basic Clusters: Basic clusters offer all the essential functionality provided by Kubernetes and Container Engine for Kubernetes, but they do not include the advanced features of Container Engine for Kubernetes. Basic clusters have a service-level objective (SLO) but do not come with a service-level agreement.
Creating a cluster is quick and easy to do. Navigate to Developer Services > Kubernetes Clusters (OKE) (see Figure 4-21). From here, you can see any existing clusters in the compartment and create a new cluster.
Figure 4-21 K8s Clusters
Next, select Create Cluster to start the dialog process shown in Figure 4-22.
For most new clusters, you should use the Quick Create approach. This option creates all new resources for the cluster, isolating it from existing workloads. For this sample, let’s create a managed cluster.
The cluster will have a public endpoint, allowing access to manage the cluster from the Internet, but the workers will be on a private subnet. This is seen in the first part of the creation dialog shown in Figure 4-23.
Figure 4-22 K8s Creation Dialog Box
Figure 4-23 K8s Creation Part 1
Next, set the initial node configuration. For the sample, three nodes, each with one OCPU and 9 GB of RAM, will be used. Each uses Oracle Linux 8 with K8s 1.30.1. You can see how the shape is set in Figure 4-24.
Figure 4-24 K8s Creation Part 2
Click Next to continue to the review page. From here, you can review the settings, shown in Figure 4-25.
Figure 4-25 K8s Review
Click Create Cluster to continue. The system will then create all of the dependency resources and the K8s cluster. When you return to the list of clusters in the compartment, you should see the cluster now, as shown in Figure 4-26.
Figure 4-26 K8s Cluster Created
From here, you can manage the cluster by using the kubectl command, just like you would for any other K8s cluster!
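Before kubectl can reach the cluster, you generate a kubeconfig with the OCI CLI. A sketch, assuming the OCI CLI is installed and configured; the cluster OCID below is a placeholder that you would replace with the OCID shown on the cluster’s details page:

```shell
# Generate a kubeconfig entry for the new cluster (the OCID is a placeholder)
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --file ~/.kube/config \
  --token-version 2.0 \
  --kube-endpoint PUBLIC_ENDPOINT

# Verify connectivity and confirm the worker nodes are Ready
kubectl get nodes
```

The `--kube-endpoint PUBLIC_ENDPOINT` flag matches the public API endpoint chosen during Quick Create; a cluster with a private endpoint would use `PRIVATE_ENDPOINT` and require network access to that subnet.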
