A cloud native brew with Oracle Database, Helidon and Kubernetes — Part 3 — GitOps Setup

Ali Mukadam
8 min read · Mar 25, 2024


In my previous post in this series, I wrote about using the ora-operator, a Kubernetes operator that helps you manage the lifecycle of your Oracle Databases running on OCI or on Kubernetes. In this article, we’ll continue our journey with ora-operator and use GitOps principles and tools to automate it all.

One of the operational advantages of using Autonomous Database is that Oracle manages the infrastructure for you: there are no VMs to manage, no OS to upgrade and so on. But it does run somewhere, right? Yes: it runs on what we call the Oracle Services Network (OSN), a conceptual network in Oracle Cloud that hosts OCI services and their underlying infrastructure, accessible via public or private endpoints. OCI Autonomous Database is one of the services that runs on the OSN. If you want to reach these services without the traffic traversing the internet, you can use either a service gateway or a private endpoint. A private endpoint essentially creates a VNIC with a routable private IP address in a designated subnet of your VCN, along with a VCN-visible DNS entry. You can read more about the OSN in my colleague Troy Levin's excellent blog article.

Using GitOps tools, we will:

  1. deploy an Autonomous Database (ADB) using the ora-operator
  2. deploy a workload cluster using Cluster API (CAPI)
  3. install the ora-operator in the workload cluster created in (2)
  4. bind the ADB instance created in (1) to the workload cluster created in (2)
  5. deploy a Helidon microservice application in the workload cluster and configure it to use the Autonomous Database
  6. test local failover of the Autonomous Database

The diagram below illustrates what we’ll try to achieve:

GitOps with Cluster API, ora-operator, Oracle Autonomous Database and Helidon

To automate it all, we’ll use Argo CD. I’ve previously written about Argo CD and Cluster API, so I won’t repeat those posts here beyond the installation steps; this saves you from digging through them to update versions, and includes a couple of improvements to the setup. That said, it wouldn’t hurt to go through them as a refresher.

In this article, we’ll focus on the setup required. It might look longer than it actually is, but that’s because you are getting my commentary along the way.

Install ora-operator in management cluster

Provision a minimal OKE cluster (2–3 nodes) to act as the management cluster and install the ora-operator in it:

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
helm install cert-manager --namespace cert-manager --version v1.14.4 jetstack/cert-manager --create-namespace
kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/oracle-database-operator.yaml
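
Before moving on, it’s worth checking that cert-manager and the operator are both running; the operator’s manifest deploys it into the oracle-database-operator-system namespace:

kubectl get pods -n cert-manager
kubectl get pods -n oracle-database-operator-system
# all pods should reach the Running state before you create any database resources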

This will help us deploy Oracle Autonomous Database or any other flavour of Oracle database that the operator can handle. In this article, instead of using key-based authentication, we’ll use instance principals and tag namespaces to define the rules for dynamic group membership. In my case, I have a tag namespace called ‘cn’ with a key ‘ora’.

Create a dynamic group with the following rule:

tag.cn.ora.value='ora-operator'

Ensure the worker nodes in your hub cluster have the defined tag cn.ora set to the value ‘ora-operator’.

Finally, create a policy and add the following policy statement to allow the dynamic group to manage ADBs:

Allow dynamic-group oracle-operator to manage autonomous-database-family in compartment <replace_me>

Install Argo CD in management cluster

Next, install Argo CD:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Obtain the Argo CD admin user initial password:

kubectl -n argocd get secret argocd-initial-admin-secret -o json | jq -r .data.password | base64 -d

Then port-forward to the Argo CD UI:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Test that you can log in to the Argo CD UI.

Next, patch the service type to load balancer:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
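
Once the load balancer is provisioned, retrieve its public IP address (this assumes the service exposes an IP rather than a hostname, which is the case on OCI):

kubectl get svc argocd-server -n argocd -o jsonpath='{.status.loadBalancer.ingress[0].ip}'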

Download the latest release of the Argo CD CLI from https://github.com/argoproj/argo-cd/releases and then log in with the IP address of the service, e.g.:

argocd login 1.2.3.4 # replace IP address

Log in with the admin username and the initial password you obtained earlier.
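
If you prefer a non-interactive login, the CLI also accepts the credentials as flags; the IP address below is a placeholder and --insecure is only needed while the Argo CD server still uses its self-signed certificate:

argocd login <EXTERNAL-IP> --username admin --password <initial-admin-password> --insecure
# confirm the CLI can reach the API server
argocd cluster list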

Install Cluster API on an Enhanced OKE Cluster

Next, install Cluster API for OCI:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-amd64 -o clusterctl
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin
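
A quick check that the CLI is installed and on your PATH:

clusterctl version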

If you are using a basic OKE cluster, export the necessary environment variables now:

export OCI_TENANCY_ID=<insert-tenancy-id-here>
export OCI_USER_ID=<insert-user-ocid-here>
export OCI_CREDENTIALS_FINGERPRINT=<insert-fingerprint-here>
export OCI_REGION=<insert-region-here>
export OCI_TENANCY_ID_B64="$(echo -n "$OCI_TENANCY_ID" | base64 | tr -d '\n')"
export OCI_CREDENTIALS_FINGERPRINT_B64="$(echo -n "$OCI_CREDENTIALS_FINGERPRINT" | base64 | tr -d '\n')"
export OCI_USER_ID_B64="$(echo -n "$OCI_USER_ID" | base64 | tr -d '\n')"
export OCI_REGION_B64="$(echo -n "$OCI_REGION" | base64 | tr -d '\n')"
export OCI_CREDENTIALS_KEY_B64=$(base64 < <insert-path-to-api-private-key-file-here> | tr -d '\n')
# if Passphrase is present
export OCI_CREDENTIALS_PASSPHRASE=<insert-passphrase-here>
export OCI_CREDENTIALS_PASSPHRASE_B64="$(echo -n "$OCI_CREDENTIALS_PASSPHRASE" | base64 | tr -d '\n')"

Initialize the management cluster and the Helm addon:

export EXP_MACHINE_POOL=true
clusterctl init --infrastructure oci -n capi-system
clusterctl init --addon helm -n capi-system
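
Give the controllers a minute to start, then confirm the core Cluster API, CAPOCI and Helm addon providers are healthy. The namespace below assumes the -n capi-system target namespace used above:

kubectl get pods -n capi-system
# the clusterctl inventory records which providers and versions were installed
kubectl get providers.clusterctl.cluster.x-k8s.io -A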

Create a manifest (e.g. capi-secret.yaml) to hold the authentication information:

apiVersion: v1
kind: Secret
metadata:
  name: capi-oke-credentials
  namespace: capi-system
type: Opaque
data:
  tenancy: ${OCI_TENANCY_ID_B64}
  user: ${OCI_USER_ID_B64}
  region: ${OCI_REGION_B64}
  key: ${OCI_CREDENTIALS_KEY_B64}
  fingerprint: ${OCI_CREDENTIALS_FINGERPRINT_B64}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: OCIClusterIdentity
metadata:
  name: cluster-identity
  namespace: capi-system
spec:
  type: UserPrincipal
  principalSecret:
    name: capi-oke-credentials
    namespace: capi-system
  allowedNamespaces: {}

You can then create a Secret and an OCIClusterIdentity that Cluster API will use:

envsubst < capi-secret.yaml | kubectl apply -f -
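
A quick check that both objects landed in the right namespace:

kubectl get secret capi-oke-credentials -n capi-system
kubectl get ociclusteridentity cluster-identity -n capi-system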

If instead you are using an enhanced cluster, then you’ll be glad to know that Cluster API for OCI now supports OKE Workload Identity too. Let’s install it:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-amd64 -o clusterctl
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin
export EXP_MACHINE_POOL=true
export INIT_OCI_CLIENTS_ON_STARTUP=false
clusterctl init --infrastructure oci -n capi-system
clusterctl init --addon helm -n capi-system

Now, all you need to do is create an OCIClusterIdentity:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: OCIClusterIdentity
metadata:
  name: cluster-identity
  namespace: capi-system
spec:
  type: Workload
  allowedNamespaces: {}

And the corresponding policy in OCI (make sure you replace the compartment name):

Allow any-user to manage virtual-network-family in compartment <replace me> where all { request.principal.type = 'workload', request.principal.namespace = 'capi-system', request.principal.service_account = 'capoci-controller-manager'} 
Allow any-user to manage cluster-family in compartment <replace me> where all { request.principal.type = 'workload', request.principal.namespace = 'capi-system', request.principal.service_account = 'capoci-controller-manager'}
Allow any-user to manage volume-family in compartment <replace me> where all { request.principal.type = 'workload', request.principal.namespace = 'capi-system', request.principal.service_account = 'capoci-controller-manager'}
Allow any-user to inspect compartments in compartment <replace me> where all { request.principal.type = 'workload', request.principal.namespace = 'capi-system', request.principal.service_account = 'capoci-controller-manager'}
Allow any-user to manage instance-family in compartment <replace me> where all { request.principal.type = 'workload', request.principal.namespace = 'capi-system', request.principal.service_account = 'capoci-controller-manager'}

We can now use the hub cluster as a control plane and deploy OKE & OCNE clusters using Argo CD applications.
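
As a preview of what that looks like, here is a minimal sketch of an Argo CD Application pointing at a Git repository that holds the Cluster API manifests; the repository URL and path are placeholders for your own GitOps repo:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workload-cluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<your-gitops-repo>.git
    targetRevision: main
    path: clusters/workload
  destination:
    server: https://kubernetes.default.svc   # the hub cluster itself hosts the CAPI objects
    namespace: capi-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true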

Deploying the infrastructure

One of the outcomes we want to achieve is the ability to quickly recreate the entire deployment using GitOps. In the example below, since we are using an Autonomous Database with a private endpoint, the endpoint must be placed in a private subnet with the necessary NSG. We could have created the networking infrastructure first and then used the ora-operator to create the ADB and its endpoint:

This pattern is particularly useful in the following scenarios:

  1. a development environment where you might want to give each dev team their own isolated Kubernetes clusters
  2. a production environment where applications must be deployed across several clusters, e.g. to meet compliance requirements or reduce the blast radius, but they all use the same database(s) or depend on each other
  3. an environment with other constraints that require you to run all your clusters in the same VCN

However, my take is that this would couple the ADB too tightly to that application’s infrastructure. Instead, we’ll create a dedicated infrastructure layer for accessing our data:

GitOps with Cluster API, ora-operator, Oracle Autonomous Database and Helidon

In this dedicated layer, we can also deploy other OCI services that our application may use and that are likewise accessible via private endpoints, e.g. OCI Streaming. Let’s create it.

Deploying a dedicated infrastructure to access our data

Create a separate VCN, along with a route table, a private subnet and an NSG. You can use either the terraform-oci-vcn module or the OCI Console to create these. At a minimum, you need the following:

  1. A VCN
  2. A subnet with a DNS label to host the private endpoint
  3. A Network Security Group (NSG) to secure the private endpoint, and hence the database. It must allow ingress from the OKE workload cluster VCN(s)
  4. A Dynamic Routing Gateway (DRG) and a Remote Peering Connection (RPC) to peer with the workload OKE clusters’ VCNs

Creating OCI Vault and Secret

In the earlier article, we stored the database admin and wallet passwords in a Kubernetes Secret. To enhance the security of this setup, we’ll instead use OCI Vault secrets to store these passwords.

First, follow the OCI instructions to create a secret in Vault. We’ll create two secrets:

  1. one for the database admin user password
  2. one for the wallet password

Finally, edit the policy created previously and add the following policy statement:

Allow dynamic-group oracle-operator to manage secrets in compartment <replaceme>
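
Looking ahead, the OCIDs of these two Vault secrets are what the AutonomousDatabase resource will reference instead of a Kubernetes Secret. The sketch below follows the layout of the operator’s sample manifests at the time of writing; treat the exact field names (adminPassword.ociSecret, wallet.password.ociSecret) as assumptions to verify against the CRD version you installed:

apiVersion: database.oracle.com/v1alpha1
kind: AutonomousDatabase
metadata:
  name: cn-adb
spec:
  details:
    compartmentOCID: ocid1.compartment.oc1..<replace_me>
    dbName: cndb
    displayName: cndb
    cpuCoreCount: 1
    dataStorageSizeInTBs: 1
    adminPassword:
      ociSecret:
        ocid: ocid1.vaultsecret.oc1..<admin-password-secret>    # admin password held in OCI Vault (assumed field layout)
    wallet:
      name: cndb-wallet                                         # Kubernetes Secret the operator will create for the wallet
      password:
        ociSecret:
          ocid: ocid1.vaultsecret.oc1..<wallet-password-secret>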

At this point, all we have is a management cluster with Cluster API, Argo CD and the Helm addon installed, and a minimal VCN with a database subnet:

We also need a Helm chart for the ora-operator published either in OCI Object Storage or on GitHub. Follow this article to set it up.
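
The linked article walks through the details; as a rough outline of the Object Storage option (the bucket, region and namespace below are placeholders, and the chart is assumed to live in a local oracle-database-operator/ directory):

helm package oracle-database-operator/
helm repo index . --url https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o
# upload the chart archive and the generated index.yaml to the bucket
oci os object put --bucket-name <bucket> --file oracle-database-operator-<version>.tgz
oci os object put --bucket-name <bucket> --file index.yaml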

In the next installment, we’ll start flexing our GitOps muscles and deploy various workloads.
