Running Continuous Integration on OKE with Tekton

Now that terraform-oci-oke 3.0 has been released, I want to explore running a “cloud native” CI on OKE. My criteria are relatively simple:

  1. Able to build and test applications using a number of tools e.g. maven/gradle/npm
  2. Able to build containers and push them to a secure registry
  3. Eventually use Infrastructure As Code for CD

I settled on Tekton. This is what the workflow will look like:

CI Workflow

Given that Tekton runs on a Kubernetes cluster, I’ve provisioned one using terraform-oci-oke. One thing I’ve done is enable the use of both public and private load balancers and set the preferred load balancer type to private. The reason is that eventually I want to deploy the APIs through the OCI API Gateway, but more on that in future posts. To do the above, I set the following in my variable file and then run terraform apply:

lb_subnet_type = "both"
preferred_lb_subnets = "private"

This will create the cluster and choose the private load balancer subnets as the preferred subnets for OKE.

In this post, I will focus on running CI with Tekton.

Follow the Tekton installation guide and run the following from the operator host:

kubectl apply --filename

Next, use the OCI Block Volume CSI plugin to create a persistent volume claim by adding the following to a file called pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tektonclaim
  namespace: tekton-pipelines
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Now, we want the PVC to be bound immediately instead of waiting for the tasks to run, so let’s create a dummy pod just to use the PVC by adding the following to dummypod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: tekton-pipelines
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: tektonclaim

Create the pod:

kubectl create -f dummypod.yaml

Check on the status of the PVC and the PV and ensure they report as bound:

$ k -n tekton-pipelines get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
tektonclaim   Bound    csi-dea15685-2fa9-4496-be86-f0c22da6e65f   50Gi       RWO            oci-bv         8m18s
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   AGE
csi-dea15685-2fa9-4496-be86-f0c22da6e65f   50Gi       RWO            Delete           Bound    tekton-pipelines/tektonclaim   oci-bv         6m25s

Delete the pod:

kubectl delete -f dummypod.yaml

Let’s now install the tekton-cli:

sudo rpm -Uvh

And the Tekton dashboard:

kubectl apply --filename

Now that we’ve finished installing Tekton, we can check if it’s working with the Hello World example on the Tekton website. Create a file task-hello.yaml:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: hello
      image: ubuntu
      command:
        - echo
      args:
        - "Hello World!"

Then create the task:

kubectl create -f task-hello.yaml

and start the task:

tkn task start hello

Finally, let’s see if the task has successfully run:

tkn taskrun logs --last -f
[hello] Hello World!

We’ve successfully installed Tekton in OKE.

We’ll use Spring Boot as the application framework. Use the Spring Initializr or command line to generate your project. We’ll use Spring Initializr here and ensure we add the Spring Web dependency:

Generating application code with Spring Initializr

Click on “Generate” to download the zip file and extract it locally.

Create a new repo in GitHub and let’s push the helloworld app to git:

cd /path/to/helloworld
git init
git add .
git commit -m "first commit"
git branch -m master main
git remote add origin
git push -u origin main

We also want to be able to build a container image, so let’s create a Dockerfile and add it to the repo:

FROM <your base Java image>
COPY target/*.jar /helloworld-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/helloworld-0.0.1-SNAPSHOT.jar"]

We can naturally update the version before building but let’s not get ahead of ourselves.

Now that our application code is on GitHub, what we want to do is get Tekton to do the following:

  1. clone the helloworld application repo
  2. build the application code; since the helloworld application uses maven, we want to be able to build using maven
  3. build the container image using a Dockerfile
  4. push the built container image to OCIR.

OCIR is private by default and we want to keep it that way. In order to be able to push an image to OCIR, we need an authentication token that is stored in a Kubernetes secret so that Tekton can use it to authenticate itself to OCIR.

Fortunately, terraform-oci-oke can create this secret for us. All you need to do is create the authentication token, store it in an OCI Vault secret and then provide the vault secret’s id in your variable file. You can follow the instructions here.

secret_id = "ocid1.vaultsecret.oc1.ap-sydney-1…"

We also need a Kubernetes ServiceAccount and again we can get this created for us by setting this in the terraform variable file:

create_service_account = true
service_account_name = "tekton"
service_account_namespace = "tekton-pipelines"
service_account_cluster_role_binding = "tekton-crb"

If you have not already done this, you’ll need to run terraform apply again to have the secret and the ServiceAccount created. Afterwards, check from the operator host that both have been created:

$ kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-qk98l   kubernetes.io/service-account-token   3      20h
ocirsecret            kubernetes.io/dockerconfigjson        1      20h
$ kubectl get serviceaccount -n tekton-pipelines
NAME      SECRETS   AGE
default   1         20h
tekton    1         49s

At this point, we need to configure the ServiceAccount to use the ocirsecret so that Tekton can authenticate itself and push images to OCIR. Now, the issue is that the ocirsecret is created in the default namespace by default, so we first need to copy it into the tekton-pipelines namespace. There is an enhancement planned on the terraform-oci-oke project that will allow you to specify a list of namespaces in which the secret should be created. If you would like to contribute, send us a PR. But for now:

$ kubectl get secret ocirsecret --namespace=default --export -o yaml | kubectl apply --namespace=tekton-pipelines -f -
$ kubectl edit sa tekton -n tekton-pipelines
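Note that --export has since been deprecated and removed from newer kubectl versions. On those versions, a rough equivalent (an assumption on my part, not part of the original instructions) is to strip the namespace field before re-applying:

```shell
# Copy the ocirsecret from the default namespace into tekton-pipelines
# by deleting the hard-coded namespace field before re-applying.
kubectl get secret ocirsecret -n default -o yaml \
  | sed '/namespace: default/d' \
  | kubectl apply -n tekton-pipelines -f -
```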

and under secrets, add the secret name:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton
  namespace: tekton-pipelines
secrets:
  - name: tekton-token-jwmbn
  - name: ocirsecret

Similarly, if you want to use the Java SE image from Oracle Container Registry (not to be confused with OCIR) instead of building a new one, you can create a second secret to access the Oracle Container Registry.

kubectl create secret docker-registry oracle-container-registry \
  -n tekton-pipelines \
  --docker-server=container-registry.oracle.com \
  --docker-username=$CONTAINER_REGISTRY_USER \
  --docker-password=$CONTAINER_REGISTRY_PASSWORD

Replace $CONTAINER_REGISTRY_USER and $CONTAINER_REGISTRY_PASSWORD with your Oracle SSO username and password. Once the secret is created, add it to your ServiceAccount as we did for ocirsecret.

Tekton also needs some annotations on its secrets in order to authenticate itself to a git repository or a container registry, and to choose the right credentials when multiple repositories are involved. You can read about it here.

But essentially, we need to add an annotation to our ocirsecret:

apiVersion: v1
kind: Secret
metadata:
  name: ocirsecret
  annotations:
    tekton.dev/docker-0: syd.ocir.io

This will indicate to Tekton that this credential applies when accessing the Sydney OCIR at syd.ocir.io. Otherwise, Tekton will ignore the secret.

In order to build the application, we need to do the following:

  1. build the application code using maven
  2. build the container image using a Dockerfile
  3. push the built container image to OCIR.

We could have used our own maven containers to build this but Tekton has this delightful catalog of external tasks that you can install and one of them is maven. You can view these tasks on GitHub or Tekton Hub which also gives you the command to install the tasks you need.

So, let’s install the following external tasks:

  • git-clone
  • maven
  • buildah

We’ll use git-clone since we only need cloning for now, maven since we used Maven when creating the Spring Boot application, and finally buildah because my good mate Avi Miller is a fan.

kubectl apply -f -n tekton-pipelines
kubectl apply -f -n tekton-pipelines
kubectl apply -f -n tekton-pipelines

Each of the steps listed above can be a Task in a Pipeline. To understand the difference between Tasks, Pipelines and PipelineRuns, you can check the Tekton concepts. So let’s create a Pipeline and a PipelineRun then:

The pipeline consists of 3 tasks:

  • fetch-repo of type git-clone
  • maven-run of type maven
  • build-image of type buildah
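A sketch of what such a Pipeline could look like, assuming the standard parameter and workspace names of the catalog tasks (url/revision for git-clone, GOALS for maven, IMAGE for buildah); adjust to your own repo as needed:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: helloworld-build
  namespace: tekton-pipelines
spec:
  params:
    - name: repo-url
      type: string
    - name: branch-name
      type: string
    - name: image-name
      type: string
    # your tenancy's object storage namespace, used in the OCIR image path
    - name: object_storage_namespace
      type: string
  workspaces:
    - name: tekton-workspace
    - name: maven-settings
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: tekton-workspace
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch-name)
    - name: maven-run
      taskRef:
        name: maven
      runAfter:
        - fetch-repo
      workspaces:
        - name: source
          workspace: tekton-workspace
        - name: maven-settings
          workspace: maven-settings
      params:
        - name: GOALS
          value:
            - -DskipTests
            - clean
            - package
    - name: build-image
      taskRef:
        name: buildah
      runAfter:
        - maven-run
      workspaces:
        - name: source
          workspace: tekton-workspace
      params:
        - name: IMAGE
          value: $(params.image-name)
```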

You’ll also notice that we ordered them with runAfter:

# in maven-run
runAfter:
  - fetch-repo

# in build-image
runAfter:
  - maven-run

This way, we get the desired order of tasks in the pipeline. Also notice the workspace references and their use of the PVC, as well as the use of the ServiceAccount and the object_storage_namespace in the PipelineRun:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: helloworld-pipeline-run
  namespace: tekton-pipelines
spec:
  serviceAccountName: tekton
  pipelineRef:
    name: helloworld-build
  workspaces:
    - name: maven-settings
      emptyDir: {}
    - name: tekton-workspace
      persistentVolumeClaim:
        claimName: tektonclaim
  params:
    - name: repo-url
      value: <your repo URL>
    - name: branch-name
      value: main
    - name: image-name
      value: <region-key>.ocir.io/<object-storage-namespace>/helloworld
    - name: object_storage_namespace
      value: <object-storage-namespace>
The object storage namespace is the one for your tenancy. You need this because this is where OCIR stores your container images.

Create a yaml file and add both the Pipeline and PipelineRun in the file and apply the manifest:

kubectl apply -f pipeline.yaml

Let’s access the dashboard:

kubectl -n tekton-pipelines get pods | grep tekton-dashboard                                                                                                                      
tekton-dashboard-5675959458-s46sm 1/1 Running 0 15h
kubectl -n tekton-pipelines port-forward tekton-dashboard-5675959458-s46sm 9097:9097

We can now access the dashboard using our browser:

List of PipelineRun

When we select a PipelineRun, we can see the following:

On the left, we can see the 3 tasks and the 2 steps of the selected task defined in this Pipeline. The screenshot shows the maven run including downloading the various dependencies. After the task is completed, you will also see the container image in your OCIR.

However, there’s a problem with the above. Every time a build is done, maven will download the dependencies over and over again (aka Download the Internet) and in this experiment, the build took on average about 4 mins 16 seconds. The screenshot below shows the timing results for 5 builds.

Timing results for 5 builds

Of these, the mvn-goals step of the maven-run task takes an average of 2 mins 42s, most of which is spent downloading the dependencies. Also bear in mind that this is a basic Spring Boot project with no additional dependencies. When you start adding more dependencies, this will quickly become unacceptable, especially if you have multiple teams building.

We need to reduce the application build time, and we can achieve this by getting maven to create and use a local cache, avoiding the repeated download of dependencies. Further, we need to share this cache between different pipelines, tasks etc. This blog post offers a possible approach. However, it defines its own maven task instead of using the available one from the catalog, and it also uses PipelineResources (I think), which may become deprecated. Nevertheless, the post shows the possibility of using a PVC as a workspace for a local maven repo.

Instead, let’s work with the idea of using a separate workspace backed by a PersistentVolumeClaim that will keep the local maven repo but we will adapt the existing maven Task from the catalog to it. Here, since I’m experimenting with only 1 worker node in my cluster, I’ll use the Block Volume as storage. However, you can also use the OCI File System Service so the ‘local maven repo’ is accessible to all the nodes. Let’s define another PVC for the local maven repo.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mavenclaim
  namespace: tekton-pipelines
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Now, the problem is that the Maven task from the Tekton catalog expects only 2 workspaces: source and maven-settings. So, in order to add the maven repo as a workspace, we’ll change the Task definition:

curl -o maven2.yaml

And we will edit it and add a 3rd workspace and change the name of the task to a rather unimaginative ‘maven2’ as well:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven2
spec:
  workspaces:
    - name: source
      description: The workspace consisting of maven project.
    - name: maven-settings
      description: >-
        The workspace consisting of the custom maven settings
        provided by the user.
    - name: maven-repo
      description: The workspace to be used as local repo.

And create the new maven task:

kubectl apply -f maven2.yaml -n tekton-pipelines

By default, you can use only 1 PVC per task but we can get around this by editing the configmap feature-flags:

kubectl -n tekton-pipelines edit configmap feature-flags

and disabling the affinity assistant:

disable-affinity-assistant: "true"

We can now adjust our pipeline to add the maven repo as a workspace:

We need to change the taskRef from ‘maven’ to ‘maven2’ and also add the following parameter:

- -Dmaven.repo.local=$(workspaces.maven-repo.path)
- -DskipTests
- clean
- package

This will force maven to use the workspace as the location of its local repo.
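Putting it together, the adjusted maven task in the pipeline and the extra workspace binding in the PipelineRun could look roughly like this (a sketch based on the names used earlier; the Pipeline’s spec.workspaces list also needs a maven-repo entry so the PipelineRun can bind it):

```yaml
# In the Pipeline: the maven task now references maven2 and the extra workspace.
    - name: maven-run
      taskRef:
        name: maven2
      runAfter:
        - fetch-repo
      workspaces:
        - name: source
          workspace: tekton-workspace
        - name: maven-settings
          workspace: maven-settings
        - name: maven-repo
          workspace: maven-repo
      params:
        - name: GOALS
          value:
            - -Dmaven.repo.local=$(workspaces.maven-repo.path)
            - -DskipTests
            - clean
            - package

# In the PipelineRun: bind the maven-repo workspace to the new PVC.
    - name: maven-repo
      persistentVolumeClaim:
        claimName: mavenclaim
```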

We can now expect the first build to be as long as the previous but the subsequent builds should be much faster. Let’s run this about the same number of times to get an average:

We can now see a significant improvement. The overall average build time (clone, maven build, app build) has now been reduced to around 2 mins 27s. This represents an improvement in build time of 42%. If you examine the actual time taken for the mvn-goals step, this improvement is even more significant and down to 9s only which is an improvement of nearly 95%.
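As a quick sanity check on those percentages (times taken from the measurements above):

```shell
# Average build times observed above, in seconds.
before_total=$((4 * 60 + 16))   # 4 min 16 s without the local repo cache
after_total=$((2 * 60 + 27))    # 2 min 27 s with the cached maven-repo workspace
before_mvn=$((2 * 60 + 42))     # mvn-goals step before caching
after_mvn=9                     # mvn-goals step after caching

awk -v b="$before_total" -v a="$after_total" -v bm="$before_mvn" -v am="$after_mvn" \
  'BEGIN { printf "overall: %.1f%%, mvn-goals: %.1f%%\n", (b - a) / b * 100, (bm - am) / bm * 100 }'
# overall: 42.6%, mvn-goals: 94.4%
```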

With hindsight, I should have named the task flashmaven or something superheroic.

I hope you find this post useful.


  1. Tekton documentation
  2. Tekton tutorial
  3. Using OCI File Storage with OKE
  4. Creating PersistentVolumes with OKE
  5. Speed up maven builds in Tekton pipelines
  6. Authentication with Tekton