Development Guide
Local Development
Using Minikube
For installation instructions, please refer to the official Minikube documentation.
It’s recommended to use rather generous memory settings to avoid unnecessary hiccups of the platform. For example, minikube start --cpus=4 --memory=16g should be sufficient for most cases.
Additional configuration steps
- Installing Istio

  By simply running istioctl install you can roll out the default Istio deployment, which is sufficient for development purposes. Check the official docs for how to install the istioctl command-line tool on your OS.

- Installing Operator Lifecycle Manager

  Running operator-sdk olm install --version v0.18.3 will install Operator Lifecycle Manager in your cluster.

- Enabling the tunnel

  For ease of use you can also enable tunneling. This creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. Simply invoke minikube tunnel in another terminal window (or run it in the background).
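The steps above can be collected into a single setup script. This is a sketch, not part of the project: with DRY_RUN=1 it only prints each command instead of executing it, so it can be inspected without a running cluster.

```shell
#!/usr/bin/env bash
# Sketch of the Minikube setup steps described above (not project tooling).
# DRY_RUN=1 prints each command instead of executing it.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}

setup_minikube() {
  run minikube start --cpus=4 --memory=16g
  run istioctl install
  run operator-sdk olm install --version v0.18.3
  # minikube tunnel normally runs in a separate terminal or in the background
  run minikube tunnel
}
```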
Running end-to-end tests
To run the tests against a local Minikube instance, you have to set a few environment variables, which you can pass as ENV_FILE when executing the test target:
First, create minikube.env on the fly, based on dynamically assigned values such as IP addresses.
cat <<EOF > minikube.env
IKE_E2E_MANAGE_CLUSTER=false
ISTIO_NS=istio-system
IKE_IMAGE_TAG=latest
TELEPRESENCE_VERSION=0.109
IKE_CLUSTER_HOST=$(minikube ip)
IKE_ISTIO_INGRESS=http://$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.clusterIP}')/
IKE_INTERNAL_CONTAINER_REGISTRY=quay.io
IKE_EXTERNAL_CONTAINER_REGISTRY=quay.io
IKE_CONTAINER_REPOSITORY=maistra-dev
PRE_BUILT_IMAGES=true
EOF
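Before running the suite, it can be worth checking that the generated file actually has values for the dynamically resolved entries (e.g. minikube ip can print nothing if the cluster is not up). The helper below is hypothetical, not part of the project:

```shell
#!/usr/bin/env bash
# Hypothetical sanity check for a generated env file: verifies that the
# listed keys are present and non-empty before passing the file to make.
check_env_file() {
  local file="$1"
  local required=(IKE_CLUSTER_HOST IKE_ISTIO_INGRESS ISTIO_NS IKE_IMAGE_TAG)
  for key in "${required[@]}"; do
    # Fail if a key is missing or has an empty value
    if ! grep -Eq "^${key}=.+" "$file"; then
      echo "missing or empty: $key" >&2
      return 1
    fi
  done
  echo "ok: $file"
}
```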
Setting PRE_BUILT_IMAGES=true will result in pulling required images from quay.io/maistra-dev. If you would like to use an internal/local registry, refer to the microk8s e2e tests example.
ENV_FILE=minikube.env make test-e2e
Using MicroK8s
MicroK8s is a lightweight, upstream Kubernetes distribution which you can run on your machine to develop and test changes.
Check official docs to see how you can install it on your OS.
Needed customizations
- Enable the following services:

  microk8s.enable dns registry istio

- Point kubectl to the microk8s instance, for example:

  sudo microk8s.kubectl config view --raw > /tmp/kubeconfig
  export KUBECONFIG=/tmp/kubeconfig
Running end-to-end tests
To run the tests against a local MicroK8s instance, you have to set a few environment variables, which you can pass as ENV_FILE when executing the test target.
Create microk8s.env with the following content:

IKE_E2E_MANAGE_CLUSTER=false
ISTIO_NS=istio-system
IKE_IMAGE_TAG=latest
TELEPRESENCE_VERSION=0.105
IKE_CLUSTER_HOST=localhost
IKE_ISTIO_INGRESS=http://localhost:31380
IKE_INTERNAL_CONTAINER_REGISTRY=localhost:32000
IKE_EXTERNAL_CONTAINER_REGISTRY=localhost:32000

Then execute:

ENV_FILE=microk8s.env make test-e2e
In this case, executing make test-e2e will build all the required images and push them to the internal registry (see the values of IKE_INTERNAL_CONTAINER_REGISTRY and IKE_EXTERNAL_CONTAINER_REGISTRY).
Release automation driven by Pull Request
By creating a Pull Request with release notes, we can automate the release process simply by using commands in the comments. You can see an actual example here.
Creating release branch
Running make draft-release-notes VERSION=v0.1.0 creates a new release notes file and an initial commit titled release: highlights of v0.1.0. This commit title will also become the title of the Pull Request. If there are noteworthy highlights, you can write a few paragraphs in the created file docs/modules/ROOT/pages/release_notes/v0.1.0.adoc.
Changelog generation using /changelog command
An owner, committer, or a member of our organization can use the /changelog command to trigger changelog generation for the v0.1.0 version (which is inferred from the PR title).
Such a comment results in adding commits to the created PR, consisting of:

- a changelog based on all PRs since the last release, which will be appended to the release highlights submitted as part of this PR.

The changelog generation job performs validation and will fail if one of the issues listed below occurs:

- the version in the title does not conform with semantic versioning
- the version has already been released
- the release notes do not exist (submitting this file is the only thing needed for this PR)
- any of the PRs created since the last release have no labels and thus cannot be categorized

In all the cases above, the PR will have the release / changelog status set to failure and a comment with an appropriate error message will be added
by the bot. You can see that in the comments of the sample PR.
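The version extraction and semantic-versioning check described above can be sketched as follows. This is illustrative only; the real job may be implemented differently.

```shell
#!/usr/bin/env bash
# Sketch of the changelog job's version validation (not the actual
# implementation): extract the version from a PR title such as
# "release: highlights of v0.1.0" and check it against semver.
version_from_title() {
  # Prints the last vX.Y.Z token found in the title, if any
  grep -Eo 'v[0-9]+\.[0-9]+\.[0-9]+' <<< "$1" | tail -1
}

is_semver() {
  # Accepts vMAJOR.MINOR.PATCH with no leading zeros
  [[ "$1" =~ ^v(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$ ]]
}
```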
Preparing the release using /release command
This command will squash all previous commits into release: highlights of v0.1.0 for a streamlined history.
Next it will create the following commits:
- a "version commit" (e.g. release: v0.1.0) which consists of a documentation version lock to v0.1.0 and a special /tag directive in the message. This directive is later used to create the actual tag when the PR is rebased onto the master branch.

- a commit which reverts the documentation version lock back to latest.
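To illustrate how such a directive could be picked out of the "version commit" message, here is a hypothetical helper (not the project's actual release tooling, and the exact directive format is an assumption):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the version following a "/tag VERSION"
# directive line in a commit message, if one is present.
extract_tag() {
  sed -n 's|^/tag \(v[0-9][0-9.]*\)$|\1|p' <<< "$1"
}
```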
Triggering release process by invoking /shipit
Once both steps above succeed, we can trigger the actual release process. This can be done by commenting with /shipit.
This will result in rebasing this PR on top of the target branch, provided all the required checks have been successful. Once the "release commit" appears
on the target branch, it will be automatically tagged based on the /tag VERSION comment in its message. That tag will trigger the
actual release process, which consists of:
- building and pushing tagged container images to the quay.io registry
- opening a Pull Request with the new operator version in Operator Hub
- opening a Pull Request with the new version of Tekton tasks
- pushing cross-compiled binaries and release notes to GitHub
- generating documentation for the released version
The diagram below describes the entire process and its artifacts.