Development Guide

Local Development

Using Minikube

For installation instructions, please refer to the official minikube documentation.

It’s recommended to use rather generous memory settings to avoid unnecessary hiccups of the platform. For example, minikube start --cpus=4 --memory=16g should be sufficient for most cases.

Additional configuration steps

  1. Installing Istio

    Simply running istioctl install rolls out the default Istio deployment, which is sufficient for development purposes. Check the official docs for how to install the istioctl command line tool on your OS.

  2. Installing Operator Lifecycle Management in the cluster

    ./bin/operator-sdk olm install --version v0.21.2
  3. Enabling the tunnel

    For ease of use you can also enable tunneling. This creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. Simply invoke minikube tunnel in another terminal window (or run it in the background).

Running end-to-end tests

To run the tests against a local minikube instance you have to set a few environment variables, which you can pass as ENV_FILE.

First, create minikube.env. Since some variables can differ between runs (such as the IP of the cluster), it’s good to have them evaluated on the fly. You can use the following snippet to create the .env file.

minikube.env
cat <<EOF > minikube.env
IKE_E2E_MANAGE_CLUSTER=false
ISTIO_NS=istio-system
IKE_IMAGE_TAG=latest
TELEPRESENCE_VERSION=0.109
IKE_CLUSTER_HOST=$(minikube ip)
IKE_ISTIO_INGRESS=http://$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.clusterIP}')/
IKE_INTERNAL_CONTAINER_REGISTRY=quay.io
IKE_EXTERNAL_CONTAINER_REGISTRY=quay.io
IKE_CONTAINER_REPOSITORY=maistra-dev
PRE_BUILT_IMAGES=true
EOF
Setting PRE_BUILT_IMAGES=true will result in pulling the required images from quay.io/maistra-dev. If you would like to use an internal/local registry, refer to the microk8s e2e tests example.
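The full image reference the tests pull is composed from these variables. Here is a minimal sketch of that composition, assuming a REGISTRY/REPOSITORY/NAME:TAG layout (the image_ref helper name is hypothetical, not part of the Makefile):

```shell
# Hypothetical helper: compose an image reference from the env values above.
# Assumes the REGISTRY/REPOSITORY/NAME:TAG layout used by quay.io/maistra-dev.
image_ref() {
  registry="$1"; repository="$2"; name="$3"; tag="$4"
  printf '%s/%s/%s:%s' "$registry" "$repository" "$name" "$tag"
}

image_ref quay.io maistra-dev ike latest
# prints quay.io/maistra-dev/ike:latest
```

The same helper composes a local-registry reference such as localhost:32000/maistra-dev/ike:latest when the registry variables point at an internal registry.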

With the created .env file you can now launch the end-to-end tests, passing the following variable:

ENV_FILE=minikube.env make test-e2e
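Because values such as the cluster IP are resolved when the file is generated, it can save a failed test run to verify that nothing resolved to an empty string. A minimal sketch of such a check (the check_env_file helper is hypothetical, not part of the project):

```shell
# Hypothetical sanity check: fail if any KEY=VALUE line in an env file has an
# empty value, e.g. when $(minikube ip) resolved to nothing during generation.
check_env_file() {
  while IFS='=' read -r key value; do
    [ -n "$value" ] || { echo "empty value for $key in $1" >&2; return 1; }
  done < "$1"
}
```

You could then run check_env_file minikube.env && ENV_FILE=minikube.env make test-e2e to fail fast on a badly generated file.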

Using MicroK8s

MicroK8s is a lightweight, upstream Kubernetes distribution which you can run on your machine to develop and test changes.

Check official docs to see how you can install it on your OS.

  1. Here’s how we install it in our CircleCI setup:

    sudo snap ack core.assert
    sudo snap install core.snap
    sudo snap ack microk8s.assert
    sudo snap install microk8s.snap --classic

Needed customizations

  1. Enable the following services:

    sudo microk8s.enable dns registry istio rbac
  2. Point kubectl to the microk8s instance, for example:

    sudo microk8s.kubectl config view --raw > /tmp/kubeconfig
    export KUBECONFIG=/tmp/kubeconfig
You might end up with Istio unable to reach outside networks. See this thread and the solution specific to Fedora.

Running end-to-end tests

To run the tests against a local microk8s instance you have to set a few environment variables, which you can pass as ENV_FILE, and execute:

ENV_FILE=microk8s.env make test-e2e
microk8s.env
IKE_E2E_MANAGE_CLUSTER=false
ISTIO_NS=istio-system
IKE_IMAGE_TAG=latest
TELEPRESENCE_VERSION=0.109
IKE_CLUSTER_HOST=localhost
IKE_ISTIO_INGRESS=http://localhost:31380
IKE_INTERNAL_CONTAINER_REGISTRY=localhost:32000
IKE_EXTERNAL_CONTAINER_REGISTRY=localhost:32000
In this case, executing make test-e2e will build all the required images and push them to the internal registry (see the values of IKE_INTERNAL_CONTAINER_REGISTRY and IKE_EXTERNAL_CONTAINER_REGISTRY).

Release

Release automation driven by Pull Request

By creating a Pull Request with release notes, we can automate the release process simply by using commands in the comments. You can see an actual example here.

Creating release branch

Running make draft-release-notes VERSION=v0.1.0 creates a new release notes file and an initial commit titled release: highlights of v0.1.0. This commit message will also become the title of the Pull Request. If there are noteworthy highlights, you can write a few paragraphs in the created file docs/modules/ROOT/pages/release_notes/v0.1.0.adoc.
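The location of the release notes file follows directly from the version. A tiny sketch of that mapping (the release_notes_path helper is hypothetical and only mirrors the path convention described above):

```shell
# Hypothetical helper mapping a version to its release notes file,
# following the docs/modules/ROOT/pages/release_notes/<version>.adoc convention.
release_notes_path() {
  printf 'docs/modules/ROOT/pages/release_notes/%s.adoc' "$1"
}

release_notes_path v0.1.0
# prints docs/modules/ROOT/pages/release_notes/v0.1.0.adoc
```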

Changelog generation using /changelog command

An owner, committer, or a member of our organization can use the /changelog command to trigger changelog generation for the v0.1.0 version (which is inferred from the PR title).

Such a comment results in adding commits to the created PR, consisting of:

  • a changelog based on all PRs since the last release, which will be appended to the release highlights submitted as part of this PR.

The changelog generation job performs validation and will fail if one of the issues listed below occurs:

  • the version in the title does not conform to semantic versioning

  • the version has already been released

  • the release notes do not exist (submitting this file is the only thing needed for this PR)

  • any of the PRs created since the last release have no labels and thus cannot be categorized

In all the cases above the PR will have the release / changelog status set to failure, and a comment with an appropriate error message will be added by the bot. You can see this in the comments of the sample PR.
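The semantic versioning check can be sketched as a simple pattern match. This is only an illustration of the kind of validation the job performs; the actual implementation may be stricter:

```shell
# Minimal sketch of a semantic version check (vMAJOR.MINOR.PATCH);
# the real changelog job's validation may differ.
is_semver() {
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

is_semver v0.1.0 && echo "valid"
is_semver v0.1 || echo "invalid"
```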

Preparing the release using /release command

This command will squash all previous commits into release: highlights of v0.1.0 for a streamlined history.

Next, it will create the following commits:

  • a "version commit" (e.g. release: v0.1.0) which consists of the documentation version lock to v0.1.0 and a special /tag directive in the message. This directive is later used to create the actual tag when the PR is rebased onto the master branch.

  • a commit which reverts the documentation version lock back to latest.
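Extracting the version from the /tag directive can be sketched as follows. This is a hypothetical illustration; the automation's actual commit message parsing may differ:

```shell
# Hypothetical sketch: pull the version out of a /tag directive
# in a "version commit" message.
tag_from_message() {
  printf '%s\n' "$1" | sed -n 's|^/tag \(v[0-9][0-9.]*\)$|\1|p'
}

msg='release: v0.1.0

/tag v0.1.0'
tag_from_message "$msg"
# prints v0.1.0
```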

Triggering release process by invoking /shipit

Once both steps above succeed, we can trigger the actual release process. This can be done by commenting with /shipit.

This will result in rebasing this PR on top of the target branch if all the required checks have been successful. Once the "release commit" appears on the target branch it will be automatically tagged based on the /tag VERSION comment in its message. That tag will trigger the actual release process, which consists of:

  1. building and pushing tagged container images to the quay.io registry

  2. opening a Pull Request with the new operator version in Operator Hub

  3. opening a Pull Request with the new version of Tekton tasks

  4. pushing cross-compiled binaries and release notes to GitHub

  5. generating documentation for released version

The diagram below describes the entire process and its artifacts.

Figure 1. Release automation