Install Tanzu Application Platform 1.3 on AKS with Multi Cluster Topology

🗃 {Dev/CaaS/Kubernetes/TAP/MultiCluster}
🏷 Kubernetes 🏷 Cartographer 🏷 AKS 🏷 Tanzu 🏷 TAP 🏷 Knative 🏷 Azure 🏷 Grype 
🗓 Updated at 2022-12-16T08:08:17Z  🗓 Created at 2022-12-16T02:18:39Z   🇯🇵 Original entry

⚠️ The content of this article is not supported by VMware. Any issues arising from it are your own responsibility; please do not contact VMware Support about them.

Install Tanzu Application Platform 1.3 on AKS with a multi-cluster topology.

Also enable HTTPS with a self-signed certificate.


Required CLIs

It is assumed that the following CLIs have already been installed: az, kubectl, docker, and imgpkg.
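
A quick sanity check that these are available on the PATH (just a convenience, not part of the official steps):

for cli in az kubectl docker imgpkg; do
  command -v ${cli} > /dev/null && echo "✅ ${cli}" || echo "❌ ${cli} not found"
done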

Install Pivnet CLI

Here we use pivnet CLI to download the required software. The pivnet CLI can be installed with brew.

brew install pivotal/tap/pivnet-cli

Obtain a VMware Tanzu Network API token and log in with the pivnet CLI.

pivnet login --api-token=<API Token>

Accept EULAs

If you are installing for the first time, accept the EULAs for the following products on the Tanzu Network: Tanzu Application Platform and Cluster Essentials for VMware Tanzu.

⚠️ The trial period in the EULA is 30 days; however, the software itself imposes no particular restrictions.

Install Tanzu CLI

# For Mac
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.3.3' --glob='tanzu-framework-darwin-amd64-*.tar'
# For Linux
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.3.3' --glob='tanzu-framework-linux-amd64-*.tar'
# For Windows
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.3.3' --glob='tanzu-framework-windows-amd64-*.zip'
tar xvf tanzu-framework-*-amd64-*.tar
install cli/core/v0.25.0/tanzu-core-*_amd64 /usr/local/bin/tanzu
export TANZU_CLI_NO_INIT=true
$ tanzu version
version: v0.25.0
buildDate: 2022-08-25
sha: 6288c751-dirty

Install plugins

tanzu plugin install --local cli all
rm -f tanzu-framework-*-amd64-*.tar

Create resource groups

Here, create a resource group for each cluster. Also, an ACR instance is created in a common resource group and shared by all the clusters.

az group create --name tap-common --location japaneast
az group create --name tap-view --location japaneast
az group create --name tap-build --location japaneast
az group create --name tap-run --location japaneast

Create an ACR instance

ACR_NAME=tap${RANDOM}
az acr create --resource-group tap-common \
  --location japaneast \
  --name ${ACR_NAME} --sku standard

Create Read Only Service Principal

ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SERVICE_PRINCIPAL_RO_NAME=tap-ro
SERVICE_PRINCIPAL_RO_PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_RO_NAME --scopes $ACR_REGISTRY_ID --years 100 --role acrpull --query password --output tsv)
SERVICE_PRINCIPAL_RO_USERNAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_RO_NAME --query "[].appId" --output tsv)

Create a Read Write Service Principal

ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
SERVICE_PRINCIPAL_RW_NAME=tap-rw
SERVICE_PRINCIPAL_RW_PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_RW_NAME --scopes $ACR_REGISTRY_ID --years 100 --role acrpush --query password --output tsv)
SERVICE_PRINCIPAL_RW_USERNAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_RW_NAME --query "[].appId" --output tsv)

Check login with Read Only Service Principal

docker login ${ACR_NAME}.azurecr.io -u ${SERVICE_PRINCIPAL_RO_USERNAME} -p ${SERVICE_PRINCIPAL_RO_PASSWORD}

Check login with Read Write Service Principal

docker login ${ACR_NAME}.azurecr.io -u ${SERVICE_PRINCIPAL_RW_USERNAME} -p ${SERVICE_PRINCIPAL_RW_PASSWORD}

Save the environment variables to a script.

cat <<EOF > env.sh
export ACR_NAME=${ACR_NAME}
export SERVICE_PRINCIPAL_RO_NAME=${SERVICE_PRINCIPAL_RO_NAME}
export SERVICE_PRINCIPAL_RO_USERNAME=${SERVICE_PRINCIPAL_RO_USERNAME}
export SERVICE_PRINCIPAL_RO_PASSWORD=${SERVICE_PRINCIPAL_RO_PASSWORD}
export SERVICE_PRINCIPAL_RW_NAME=${SERVICE_PRINCIPAL_RW_NAME}
export SERVICE_PRINCIPAL_RW_USERNAME=${SERVICE_PRINCIPAL_RW_USERNAME}
export SERVICE_PRINCIPAL_RW_PASSWORD=${SERVICE_PRINCIPAL_RW_PASSWORD}
EOF
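
When coming back to this setup in a new shell, the variables can be restored by sourcing this script:

source ./env.sh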

Relocate images

Log in to registry.tanzu.vmware.com with your Tanzu Network account.

TANZUNET_USERNAME=...
TANZUNET_PASSWORD=...

docker login registry.tanzu.vmware.com -u ${TANZUNET_USERNAME} -p ${TANZUNET_PASSWORD}

Relocate Cluster Essentials

imgpkg copy \
  -b registry.tanzu.vmware.com/tanzu-cluster-essentials/cluster-essentials-bundle:1.3.0 \
  --to-repo ${ACR_NAME}.azurecr.io/tanzu-cluster-essentials/cluster-essentials-bundle \
  --include-non-distributable-layers

Relocate TAP packages

This takes a while because the bundle is large (8 GB or more).

imgpkg copy \
  -b registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:1.3.3 \
  --to-repo ${ACR_NAME}.azurecr.io/tanzu-application-platform/tap-packages \
  --include-non-distributable-layers

Relocate TBS full dependencies

This also takes a while because the bundle is large (9 GB or more).

imgpkg copy \
  -b registry.tanzu.vmware.com/tanzu-application-platform/full-tbs-deps-package-repo:1.7.4 \
  --to-repo ${ACR_NAME}.azurecr.io/tanzu-application-platform/full-tbs-deps-package-repo \
  --include-non-distributable-layers
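
As an optional sanity check, the relocated repositories should now be visible in ACR. For example:

az acr repository list --name ${ACR_NAME} -o table
# Expected to include at least:
#   tanzu-cluster-essentials/cluster-essentials-bundle
#   tanzu-application-platform/tap-packages
#   tanzu-application-platform/full-tbs-deps-package-repo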

Create AKS clusters

Use standard_f4s_v2 (4 vCPU, 8 GB memory) worker nodes for the AKS clusters, with the cluster autoscaler enabled. Also enable integration with Azure AD.

for View Cluster

az aks create \
  --resource-group tap-view \
  --location japaneast \
  --name tap-view \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10 \
  --node-vm-size standard_f4s_v2 \
  --load-balancer-sku standard \
  --kubernetes-version 1.24 \
  --generate-ssh-keys \
  --enable-aad

Grant the AKS cluster the permission it needs so that a static IP can later be assigned to Contour's Envoy. I referred to the following document:
https://docs.microsoft.com/en-us/azure/aks/static-ip#create-a-service-using-the-static-ip-address

RG_ID=$(az group show --name tap-view -o tsv --query id )
SP_APP_ID=$(az aks show --name tap-view --resource-group tap-view --query "identity.principalId" -o tsv)
az role assignment create --assignee-object-id ${SP_APP_ID} --assignee-principal-type "ServicePrincipal" --role "Network Contributor" --scope ${RG_ID}

for Build Cluster

az aks create \
  --resource-group tap-build \
  --location japaneast \
  --name tap-build \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10 \
  --node-vm-size standard_f4s_v2 \
  --load-balancer-sku standard \
  --kubernetes-version 1.24 \
  --generate-ssh-keys \
  --enable-aad

for Run Cluster

az aks create \
  --resource-group tap-run \
  --location japaneast \
  --name tap-run \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10 \
  --node-vm-size standard_f4s_v2 \
  --load-balancer-sku standard \
  --kubernetes-version 1.24 \
  --generate-ssh-keys \
  --enable-aad

Grant the AKS cluster the permission it needs so that a static IP can later be assigned to Contour's Envoy. I referred to the following document:
https://docs.microsoft.com/en-us/azure/aks/static-ip#create-a-service-using-the-static-ip-address

RG_ID=$(az group show --name tap-run -o tsv --query id )
SP_APP_ID=$(az aks show --name tap-run --resource-group tap-run --query "identity.principalId" -o tsv)
az role assignment create --assignee-object-id ${SP_APP_ID} --assignee-principal-type "ServicePrincipal" --role "Network Contributor" --scope ${RG_ID}

Check the clusters

$ az aks list -o table
Name       Location    ResourceGroup    KubernetesVersion    CurrentKubernetesVersion    ProvisioningState    Fqdn
---------  ----------  ---------------  -------------------  --------------------------  -------------------  -----------------------------------------------------------
tap-build  japaneast   tap-build        1.24                 1.24.6                      Succeeded            tap-build-tap-build-85cd83-7ed8ba35.hcp.japaneast.azmk8s.io
tap-run    japaneast   tap-run          1.24                 1.24.6                      Succeeded            tap-run-tap-run-85cd83-bc4d3bda.hcp.japaneast.azmk8s.io
tap-view   japaneast   tap-view         1.24                 1.24.6                      Succeeded            tap-view-tap-view-85cd83-27be84e1.hcp.japaneast.azmk8s.io

Install Cluster Essentials for VMware Tanzu

Install Cluster Essentials for VMware Tanzu to deploy the kapp-controller and secretgen-controller required for the TAP installation.

# Mac
pivnet download-product-files --product-slug='tanzu-cluster-essentials' --release-version='1.3.0' --glob='tanzu-cluster-essentials-darwin-amd64-*'
# Linux
pivnet download-product-files --product-slug='tanzu-cluster-essentials' --release-version='1.3.0' --glob='tanzu-cluster-essentials-linux-amd64-*'
mkdir tanzu-cluster-essentials
tar xzvf tanzu-cluster-essentials-*-amd64-*.tgz -C tanzu-cluster-essentials

export INSTALL_BUNDLE=${ACR_NAME}.azurecr.io/tanzu-cluster-essentials/cluster-essentials-bundle:1.3.0
export INSTALL_REGISTRY_HOSTNAME=${ACR_NAME}.azurecr.io
export INSTALL_REGISTRY_USERNAME=${SERVICE_PRINCIPAL_RO_USERNAME}
export INSTALL_REGISTRY_PASSWORD=${SERVICE_PRINCIPAL_RO_PASSWORD}
cd tanzu-cluster-essentials


# For View Cluster
az aks get-credentials --resource-group tap-view --name tap-view --admin --overwrite-existing
./install.sh --yes

# For Build Cluster
az aks get-credentials --resource-group tap-build --name tap-build --admin --overwrite-existing
./install.sh --yes

# For Run Cluster
az aks get-credentials --resource-group tap-run --name tap-run --admin --overwrite-existing
./install.sh --yes

cd ..

rm -f tanzu-cluster-essentials-*-amd64-*.tgz
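
To verify that install.sh worked on all three clusters, check that the kapp-controller and secretgen-controller deployments are available (an optional check; the namespace and deployment names below are the ones used by the Cluster Essentials bundle):

for ctx in tap-view-admin tap-build-admin tap-run-admin; do
  echo "=== ${ctx} ==="
  kubectl --context ${ctx} -n kapp-controller get deploy kapp-controller
  kubectl --context ${ctx} -n secretgen-controller get deploy secretgen-controller
done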

Generate a CA (self-signed) certificate

mkdir -p certs
rm -f certs/*
docker run --rm -v ${PWD}/certs:/certs hitch openssl req -new -nodes -out /certs/ca.csr -keyout /certs/ca.key -subj "/CN=default-ca/O=TAP/C=JP"
chmod og-rwx certs/ca.key
docker run --rm -v ${PWD}/certs:/certs hitch openssl x509 -req -in /certs/ca.csr -days 3650 -extfile /etc/ssl/openssl.cnf -extensions v3_ca -signkey /certs/ca.key -out /certs/ca.crt
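
If you want to double-check what was generated, the CA certificate can be inspected with the same hitch image (optional):

docker run --rm -v ${PWD}/certs:/certs hitch openssl x509 -in /certs/ca.crt -noout -subject -issuer -dates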

Create a Service Account for the TAP GUI in the Build and Run Clusters

https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-tap-gui-cluster-view-setup.html

mkdir -p tap-gui
cat <<EOF > tap-gui/tap-gui-viewer-service-account-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tap-gui
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: tap-gui
  name: tap-gui-viewer
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
metadata:
  name: tap-gui-viewer
  namespace: tap-gui
  annotations:
    kubernetes.io/service-account.name: tap-gui-viewer
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tap-gui-read-k8s
subjects:
- kind: ServiceAccount
  namespace: tap-gui
  name: tap-gui-viewer
roleRef:
  kind: ClusterRole
  name: k8s-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-reader
rules:
- apiGroups: ['']
  resources: ['pods', 'pods/log', 'services', 'configmaps']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['apps']
  resources: ['deployments', 'replicasets']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['autoscaling']
  resources: ['horizontalpodautoscalers']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['networking.k8s.io']
  resources: ['ingresses']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['networking.internal.knative.dev']
  resources: ['serverlessservices']
  verbs: ['get', 'watch', 'list']
- apiGroups: [ 'autoscaling.internal.knative.dev' ]
  resources: [ 'podautoscalers' ]
  verbs: [ 'get', 'watch', 'list' ]
- apiGroups: ['serving.knative.dev']
  resources:
  - configurations
  - revisions
  - routes
  - services
  verbs: ['get', 'watch', 'list']
- apiGroups: ['carto.run']
  resources:
  - clusterconfigtemplates
  - clusterdeliveries
  - clusterdeploymenttemplates
  - clusterimagetemplates
  - clusterruntemplates
  - clustersourcetemplates
  - clustersupplychains
  - clustertemplates
  - deliverables
  - runnables
  - workloads
  verbs: ['get', 'watch', 'list']
- apiGroups: ['source.toolkit.fluxcd.io']
  resources:
  - gitrepositories
  verbs: ['get', 'watch', 'list']
- apiGroups: ['source.apps.tanzu.vmware.com']
  resources:
  - imagerepositories
  - mavenartifacts
  verbs: ['get', 'watch', 'list']
- apiGroups: ['conventions.carto.run']
  resources:
  - podintents
  verbs: ['get', 'watch', 'list']
- apiGroups: ['kpack.io']
  resources:
  - images
  - builds
  verbs: ['get', 'watch', 'list']
- apiGroups: ['scanning.apps.tanzu.vmware.com']
  resources:
  - sourcescans
  - imagescans
  - scanpolicies
  verbs: ['get', 'watch', 'list']
- apiGroups: ['tekton.dev']
  resources:
  - taskruns
  - pipelineruns
  verbs: ['get', 'watch', 'list']
- apiGroups: ['kappctrl.k14s.io']
  resources:
  - apps
  verbs: ['get', 'watch', 'list']
EOF
kubectl apply -f tap-gui/tap-gui-viewer-service-account-rbac.yaml --context tap-build-admin
kubectl apply -f tap-gui/tap-gui-viewer-service-account-rbac.yaml --context tap-run-admin

Retrieve the server URLs, CA certificates, and the generated tokens

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' --context tap-build-admin > tap-gui/cluster-url-build
kubectl -n tap-gui get secret tap-gui-viewer --context tap-build-admin -otemplate='{{index .data "token" | base64decode}}' > tap-gui/cluster-token-build
kubectl -n tap-gui get secret tap-gui-viewer --context tap-build-admin -otemplate='{{index .data "ca.crt"}}'  > tap-gui/cluster-ca-build

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' --context tap-run-admin > tap-gui/cluster-url-run
kubectl -n tap-gui get secret tap-gui-viewer --context tap-run-admin -otemplate='{{index .data "token" | base64decode}}' > tap-gui/cluster-token-run
kubectl -n tap-gui get secret tap-gui-viewer --context tap-run-admin -otemplate='{{index .data "ca.crt"}}'  > tap-gui/cluster-ca-run
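
Optionally, confirm that the retrieved tokens and CA certificates actually work against each API server before wiring them into the TAP GUI. A minimal check for the Build Cluster (assuming bash process substitution; the same works for the Run Cluster by replacing "build" with "run"):

curl -s --cacert <(base64 --decode < tap-gui/cluster-ca-build) \
  -H "Authorization: Bearer $(cat tap-gui/cluster-token-build)" \
  "$(cat tap-gui/cluster-url-build)/version"
# should return the Kubernetes version JSON of the Build Cluster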

Install Tanzu Application Platform (View Cluster)

Change the context

kubectl config use-context tap-view-admin

Register Package Repository for TAP

kubectl create ns tap-install

tanzu secret registry add tap-registry \
  --username "${SERVICE_PRINCIPAL_RO_USERNAME}" \
  --password "${SERVICE_PRINCIPAL_RO_PASSWORD}" \
  --server ${ACR_NAME}.azurecr.io \
  --export-to-all-namespaces \
  --yes \
  --namespace tap-install

tanzu package repository add tanzu-tap-repository \
  --url ${ACR_NAME}.azurecr.io/tanzu-application-platform/tap-packages:1.3.3 \
  --namespace tap-install
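
It is worth waiting for the repository to reconcile before installing the profile (optional check):

tanzu package repository get tanzu-tap-repository --namespace tap-install
# STATUS should eventually be "Reconcile succeeded"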

Create Public IP for Envoy

az network public-ip create --resource-group tap-view --location japaneast --name envoy-ip --sku Standard --allocation-method static
ENVOY_IP_VIEW=$(az network public-ip show --resource-group tap-view --name envoy-ip --query ipAddress --output tsv)

Check the public IPs

$ az network public-ip list -o table
Name                                  ResourceGroup                     Location    Zones    Address        IdleTimeoutInMinutes    ProvisioningState
------------------------------------  --------------------------------  ----------  -------  -------------  ----------------------  -------------------
046fb436-2a94-4258-98c7-936672972a31  MC_tap-build_tap-build_japaneast  japaneast   123      4.241.144.189  30                      Succeeded
48543263-ed47-41cd-abc3-cb23671407d6  MC_tap-run_tap-run_japaneast      japaneast   123      20.27.128.119  30                      Succeeded
fd09c63c-f277-498e-8338-4cdfda21c7ba  MC_tap-view_tap-view_japaneast    japaneast   123      52.155.119.68  30                      Succeeded
envoy-ip                              tap-view                          japaneast            20.89.90.83    4                       Succeeded

Creating overlay files

Create overlay files to reduce post-installation work.

DOMAIN_NAME_VIEW=view.$(echo ${ENVOY_IP_VIEW} | sed 's/\./-/g').sslip.io

mkdir -p overlays/view


cat <<EOF > overlays/view/contour-default-tls.yaml                                                                                                                                                                                                                          
#@ load("@ytt:data", "data")
#@ load("@ytt:overlay", "overlay")
#@ namespace = data.values.namespace
---
apiVersion: v1
kind: Secret
metadata:
  name: default-ca
  namespace: #@ namespace
type: kubernetes.io/tls
stringData:
  tls.crt: |
$(cat certs/ca.crt | sed 's/^/    /g')
  tls.key: |
$(cat certs/ca.key | sed 's/^/    /g')
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: default-ca-issuer
  namespace: #@ namespace
spec:
  ca:
    secretName: default-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tap-default-tls
  namespace: #@ namespace
spec:
  dnsNames:
  - #@ "*.${DOMAIN_NAME_VIEW}"
  issuerRef:
    kind: Issuer
    name: default-ca-issuer
  secretName: tap-default-tls
---
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: contour-delegation
  namespace: #@ namespace
spec:
  delegations:
  - secretName: tap-default-tls
    targetNamespaces:
    - "*"
EOF


cat <<'EOF' > overlays/view/tap-gui-db.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"Deployment","metadata":{"name":"server"}})
---
spec:
  #@overlay/match missing_ok=True
  template:
    spec:
      containers:
      #@overlay/match by="name"
      - name: backstage
        #@overlay/match missing_ok=True
        envFrom:
         - secretRef:
             name: tap-gui-db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tap-gui-db
  namespace: tap-gui
  labels:
    app.kubernetes.io/part-of: tap-gui-db
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tap-gui-db
  namespace: tap-gui
  labels:
    app.kubernetes.io/part-of: tap-gui-db
spec:
  selector:
    matchLabels:
      app.kubernetes.io/part-of: tap-gui-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/part-of: tap-gui-db
    spec:
      initContainers:
      - name: remove-lost-found
        image: busybox
        command:
        - sh
        - -c
        - |
          rm -fr /var/lib/postgresql/data/lost+found
        volumeMounts:
        - name: tap-gui-db
          mountPath: /var/lib/postgresql/data
      containers:
      - image: postgres:14-alpine
        name: postgres
        envFrom:
        - secretRef:
            name: tap-gui-db
        ports:
        - containerPort: 5432
          name: tap-gui-db
        volumeMounts:
        - name: tap-gui-db
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: tap-gui-db
        persistentVolumeClaim:
          claimName: tap-gui-db
---
apiVersion: v1
kind: Service
metadata:
  name: tap-gui-db
  namespace: tap-gui
  labels:
    app.kubernetes.io/part-of: tap-gui-db
spec:
  ports:
  - port: 5432
  selector:
    app.kubernetes.io/part-of: tap-gui-db
---
apiVersion: secretgen.k14s.io/v1alpha1
kind: Password
metadata:
  name: tap-gui-db
  namespace: tap-gui
  labels:
    app.kubernetes.io/part-of: tap-gui-db
spec:
  secretTemplate:
    type: Opaque
    stringData:
      POSTGRES_USER: tap-gui
      POSTGRES_PASSWORD: $(value)
EOF

cat <<EOF > overlays/view/metadata-store-read-only-client.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metadata-store-ready-only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metadata-store-read-only
subjects:
- kind: ServiceAccount
  name: metadata-store-read-client
  namespace: metadata-store
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metadata-store-read-client
  namespace: metadata-store
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
metadata:
  name: metadata-store-read-client
  namespace: metadata-store
  annotations:
    kubernetes.io/service-account.name: metadata-store-read-client
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: metadata-store-read-client-secret-read
  namespace: metadata-store
rules:
- apiGroups: [ "" ]
  resources: [ "secrets" ]
  resourceNames: [ "metadata-store-read-client" ]
  verbs: [ "get" ]
EOF

Create secrets for each overlay file

kubectl -n tap-install create secret generic contour-default-tls \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/view/contour-default-tls.yaml \
  | kubectl apply -f-

kubectl -n tap-install create secret generic tap-gui-db \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/view/tap-gui-db.yaml \
  | kubectl apply -f-

kubectl -n tap-install create secret generic metadata-store-read-only-client \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/view/metadata-store-read-only-client.yaml \
  | kubectl apply -f-

Install the View Profile

Create tap-values-view.yaml. The "CHANGEME" part will be updated later.

cat <<EOF > tap-values-view.yaml
profile: view

ceip_policy_disclosed: true

shared:
  ingress_domain: ${DOMAIN_NAME_VIEW}
  ca_cert_data: |
$(cat certs/ca.crt | sed 's/^/    /g')

contour:
  infrastructure_provider: azure
  contour:
    configFileContents:
      accesslog-format: json  
  envoy:
    service:
      type: LoadBalancer
      loadBalancerIP: ${ENVOY_IP_VIEW}      
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-resource-group: tap-view

tap_gui:
  service_type: ClusterIP
  tls:
    secretName: tap-default-tls
    namespace: tanzu-system-ingress
  app_config:
    backend:
      database:
        client: pg
        connection:
          host: \${TAP_GUI_DB_SERVICE_HOST}
          port: \${TAP_GUI_DB_SERVICE_PORT}
          user: \${POSTGRES_USER}
          password: \${POSTGRES_PASSWORD}
    kubernetes:
      serviceLocatorMethod:
        type: multiTenant
      clusterLocatorMethods:
      - type: config
        clusters:
        - url: $(cat tap-gui/cluster-url-run)
          name: run
          authProvider: serviceAccount
          serviceAccountToken: $(cat tap-gui/cluster-token-run)
          skipTLSVerify: false
          caData: $(cat tap-gui/cluster-ca-run)
      - type: config
        clusters:
        - url: $(cat tap-gui/cluster-url-build)
          name: build
          authProvider: serviceAccount
          serviceAccountToken: $(cat tap-gui/cluster-token-build)
          skipTLSVerify: false
          caData: $(cat tap-gui/cluster-ca-build)
    proxy:
      /metadata-store:
        target: https://metadata-store-app.metadata-store:8443/api/v1
        changeOrigin: true
        secure: false
        headers:
          Authorization: "Bearer CHANGEME"
          X-Custom-Source: project-star
appliveview:
  ingressEnabled: true
  tls:
    secretName: tap-default-tls
    namespace: tanzu-system-ingress

accelerator:
  ingress:
    include: true    
    enable_tls: true  
  tls:
    secret_name: tap-default-tls
    namespace: tanzu-system-ingress

metadata_store:
  ns_for_export_app_cert: "*"

package_overlays:
- name: contour
  secrets:
  - name: contour-default-tls
- name: tap-gui
  secrets:
  - name: tap-gui-db
- name: metadata-store
  secrets:
  - name: metadata-store-read-only-client

excluded_packages:
- learningcenter.tanzu.vmware.com
- workshops.learningcenter.tanzu.vmware.com
- api-portal.tanzu.vmware.com
EOF

Install TAP

tanzu package install tap \
  -p tap.tanzu.vmware.com \
  -v 1.3.3 \
  --values-file tap-values-view.yaml \
  -n tap-install \
  --wait=false

Wait until the installation succeeds. It takes about 5 minutes.

while [ "$(kubectl -n tap-install get app tap -o=jsonpath='{.status.friendlyDescription}')" != "Reconcile succeeded" ];do
  date
  kubectl get app -n tap-install
  echo "---------------------------------------------------------------------"
  sleep 30
done
echo "✅ Install succeeded"

Check the packageinstalls

$ kubectl get packageinstall -n tap-install 
NAME                       PACKAGE NAME                                PACKAGE VERSION   DESCRIPTION           AGE
accelerator                accelerator.apps.tanzu.vmware.com           1.3.2             Reconcile succeeded   3m19s
appliveview                backend.appliveview.tanzu.vmware.com        1.3.1             Reconcile succeeded   3m19s
cert-manager               cert-manager.tanzu.vmware.com               1.7.2+tap.1       Reconcile succeeded   5m19s
contour                    contour.tanzu.vmware.com                    1.22.0+tap.5      Reconcile succeeded   5m5s
fluxcd-source-controller   fluxcd.source.controller.tanzu.vmware.com   0.27.0+tap.1      Reconcile succeeded   5m19s
metadata-store             metadata-store.apps.tanzu.vmware.com        1.3.4             Reconcile succeeded   3m19s
source-controller          controller.source.apps.tanzu.vmware.com     0.5.1             Reconcile succeeded   5m5s
tap                        tap.tanzu.vmware.com                        1.3.3             Reconcile succeeded   5m20s
tap-gui                    tap-gui.tanzu.vmware.com                    1.3.4             Reconcile succeeded   3m19s
tap-telemetry              tap-telemetry.tanzu.vmware.com              0.3.2             Reconcile succeeded   5m19s

Get the access token generated for the Metadata Store, set it in tap-values-view.yaml, and then update the package install.

sed -i.bak "s/CHANGEME/$(kubectl get secret -n metadata-store metadata-store-read-client -otemplate='{{.data.token | base64decode}}')/" tap-values-view.yaml
tanzu package installed update -n tap-install tap -f tap-values-view.yaml

Check HTTPProxy to see accessible URLs.

$ kubectl get httpproxy -A
NAMESPACE            NAME                     FQDN                                       TLS SECRET                             STATUS   STATUS DESCRIPTION
accelerator-system   accelerator              accelerator.view.20-89-90-83.sslip.io      tanzu-system-ingress/tap-default-tls   valid    Valid HTTPProxy
app-live-view        appliveview              appliveview.view.20-89-90-83.sslip.io      tanzu-system-ingress/tap-default-tls   valid    Valid HTTPProxy
metadata-store       metadata-store-ingress   metadata-store.view.20-89-90-83.sslip.io   ingress-cert                           valid    Valid HTTPProxy
tap-gui              tap-gui                  tap-gui.view.20-89-90-83.sslip.io          tanzu-system-ingress/tap-default-tls   valid    Valid HTTPProxy

Access the TAP GUI

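Before (or instead of) opening a browser, you can also check from the command line that the GUI is served with a certificate issued by the self-signed CA (an optional check):

curl -s --cacert certs/ca.crt -o /dev/null -w '%{http_code}\n' https://tap-gui.${DOMAIN_NAME_VIEW}
# 200 means the GUI is up and its certificate chain is trusted via certs/ca.crt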

Save the environment variables to env.sh.

cat <<EOF >> env.sh
export ENVOY_IP_VIEW=${ENVOY_IP_VIEW}
export DOMAIN_NAME_VIEW=${DOMAIN_NAME_VIEW}
EOF

Install Tanzu Application Platform (Build Cluster)

Change the context

kubectl config use-context tap-build-admin

Register Package Repository for TAP

kubectl create ns tap-install

tanzu secret registry add tap-registry \
  --username "${SERVICE_PRINCIPAL_RO_USERNAME}" \
  --password "${SERVICE_PRINCIPAL_RO_PASSWORD}" \
  --server ${ACR_NAME}.azurecr.io \
  --export-to-all-namespaces \
  --yes \
  --namespace tap-install

tanzu package repository add tanzu-tap-repository \
  --url ${ACR_NAME}.azurecr.io/tanzu-application-platform/tap-packages:1.3.3 \
  --namespace tap-install

tanzu package repository add tbs-full-deps-repository \
  --url ${ACR_NAME}.azurecr.io/tanzu-application-platform/full-tbs-deps-package-repo:1.7.4 \
  --namespace tap-install

Create the default repository secret for kpack

cat <<'EOF' > create-repository-secret.sh
#!/bin/bash
REGISTRY_SERVER=$1
REGISTRY_USERNAME=$2
REGISTRY_PASSWORD=$3
cat <<SCRIPT | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: repository-secret
  namespace: tap-install
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "${REGISTRY_SERVER}": {
          "username": "${REGUSTRY_USERNAME}",
          "password": "${REGUSTRY_PASSWORD}"
        }
      }
    }
SCRIPT
EOF
chmod +x create-repository-secret.sh

Create Secret with Read Write Service Principal

./create-repository-secret.sh ${ACR_NAME}.azurecr.io ${SERVICE_PRINCIPAL_RW_USERNAME} ${SERVICE_PRINCIPAL_RW_PASSWORD}
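
The resulting secret can be checked as follows (optional); the decoded .dockerconfigjson should point at your ACR instance:

kubectl -n tap-install get secret repository-secret -otemplate='{{index .data ".dockerconfigjson" | base64decode}}'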

Creating overlay files

Create overlay files to reduce post-installation work.

mkdir -p overlays/build

cat <<EOF > overlays/build/metadata-store-secrets.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: metadata-store-secrets
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: store-ca-cert
  namespace: metadata-store-secrets
stringData:
  ca.crt: |
$(kubectl get secret -n metadata-store ingress-cert -otemplate='{{index .data "ca.crt" | base64decode}}' --context tap-view-admin | sed 's/^/    /g')
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: store-auth-token
  namespace: metadata-store-secrets
stringData:
  auth_token: $(kubectl get secret -n metadata-store metadata-store-read-write-client -otemplate='{{.data.token | base64decode}}' --context tap-view-admin)
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: store-ca-cert
  namespace: metadata-store-secrets
spec:
  toNamespace: "*"
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: store-auth-token
  namespace: metadata-store-secrets
spec:
  toNamespace: "*"
EOF

Create secrets for each overlay file

kubectl -n tap-install create secret generic metadata-store-secrets \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/build/metadata-store-secrets.yaml \
  | kubectl apply -f-

Install the Build Profile

Create tap-values-build.yaml. The Grype package will be installed for each namespace later, so it is excluded here.

cat <<EOF > tap-values-build.yaml
profile: build

shared:
  image_registry:
    project_path: "${ACR_NAME}.azurecr.io/tanzu-application-platform"
    secret:
      name: repository-secret
      namespace: tap-install
  ca_cert_data: |
$(cat certs/ca.crt | sed 's/^/    /g')

ceip_policy_disclosed: true

buildservice: 
  exclude_dependencies: true

supply_chain: testing_scanning

scanning:
  metadataStore:
    url: ""

package_overlays:
- name: ootb-supply-chain-testing-scanning
  secrets:
  - name: metadata-store-secrets

excluded_packages:
- grype.scanning.apps.tanzu.vmware.com
- contour.tanzu.vmware.com
EOF

Install TAP and TBS full dependencies

tanzu package install tap \
  -p tap.tanzu.vmware.com \
  -v 1.3.3 \
  --values-file tap-values-build.yaml \
  -n tap-install \
  --wait=false

tanzu package install full-tbs-deps \
  -p full-tbs-deps.tanzu.vmware.com \
  -v 1.7.4 \
  -n tap-install \
  --wait=false

Wait until the installation succeeds. It takes about 5 minutes.

while [ "$(kubectl -n tap-install get app tap -o=jsonpath='{.status.friendlyDescription}')" != "Reconcile succeeded" ];do
  date
  kubectl get app -n tap-install
  echo "---------------------------------------------------------------------"
  sleep 30
done
while [ "$(kubectl -n tap-install get app full-tbs-deps -o=jsonpath='{.status.friendlyDescription}')" != "Reconcile succeeded" ];do
  date
  kubectl get app -n tap-install
  echo "---------------------------------------------------------------------"
  sleep 30
done
echo "✅ Install succeeded"

Check the packageinstalls

$ tanzu package installed list -n tap-install                              

  NAME                                PACKAGE-NAME                                         PACKAGE-VERSION  STATUS               
  appliveview-conventions             conventions.appliveview.tanzu.vmware.com             1.3.1            Reconcile succeeded  
  buildservice                        buildservice.tanzu.vmware.com                        1.7.4            Reconcile succeeded  
  cartographer                        cartographer.tanzu.vmware.com                        0.5.4            Reconcile succeeded  
  cert-manager                        cert-manager.tanzu.vmware.com                        1.7.2+tap.1      Reconcile succeeded  
  conventions-controller              controller.conventions.apps.tanzu.vmware.com         0.7.1            Reconcile succeeded  
  fluxcd-source-controller            fluxcd.source.controller.tanzu.vmware.com            0.27.0+tap.1     Reconcile succeeded  
  full-tbs-deps                       full-tbs-deps.tanzu.vmware.com                       1.7.4            Reconcile succeeded  
  ootb-supply-chain-testing-scanning  ootb-supply-chain-testing-scanning.tanzu.vmware.com  0.10.5           Reconcile succeeded  
  ootb-templates                      ootb-templates.tanzu.vmware.com                      0.10.5           Reconcile succeeded  
  scanning                            scanning.apps.tanzu.vmware.com                       1.3.1            Reconcile succeeded  
  source-controller                   controller.source.apps.tanzu.vmware.com              0.5.1            Reconcile succeeded  
  spring-boot-conventions             spring-boot-conventions.tanzu.vmware.com             0.5.0            Reconcile succeeded  
  tap                                 tap.tanzu.vmware.com                                 1.3.3            Reconcile succeeded  
  tap-auth                            tap-auth.tanzu.vmware.com                            1.1.0            Reconcile succeeded  
  tap-telemetry                       tap-telemetry.tanzu.vmware.com                       0.3.2            Reconcile succeeded  
  tekton-pipelines                    tekton.tanzu.vmware.com                              0.39.0+tap.2     Reconcile succeeded 

Make sure ClusterBuilders are READY

$ kubectl get clusterbuilder
NAME         LATESTIMAGE                                                                                                                                                     READY
base         tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-base@sha256:2f44fc24b1c278dfc8feaa8f19cc33330234a05a414341d9022b468fbd0d6fdb         True
base-jammy   tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-base-jammy@sha256:947457234ce80b7aac02e65fdaf2bf28bba39d5ee8fab098eb711f5586d6d633   True
default      tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-default@sha256:2f44fc24b1c278dfc8feaa8f19cc33330234a05a414341d9022b468fbd0d6fdb      True
full         tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-full@sha256:0f6556997d6d4237197b996f09175f583ea92485637cae7d95655cbfc5a27c8f         True
full-jammy   tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-full-jammy@sha256:8926b70651b1066df57e2b22f0f52756e228b3168d28519bdfff3147820274c8   True
tiny         tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-tiny@sha256:48294b40310c04e8f3016b969b8a2c725a133ea78e41a08ea5c6a857eb5a8d64         True
tiny-jammy   tap16979.azurecr.io/tanzu-application-platform/buildservice:clusterbuilder-tiny-jammy@sha256:52e8e04310a17fe68e99c5e282a035ab100bf7368f2a2d494912e88bd5696ba8   True

Install Tanzu Application Platform (Run Cluster)

Change the context

kubectl config use-context tap-run-admin

Register Package Repository for TAP

kubectl create ns tap-install

tanzu secret registry add tap-registry \
  --username "${SERVICE_PRINCIPAL_RO_USERNAME}" \
  --password "${SERVICE_PRINCIPAL_RO_PASSWORD}" \
  --server ${ACR_NAME}.azurecr.io \
  --export-to-all-namespaces \
  --yes \
  --namespace tap-install

tanzu package repository add tanzu-tap-repository \
  --url ${ACR_NAME}.azurecr.io/tanzu-application-platform/tap-packages:1.3.3 \
  --namespace tap-install

Create Public IP for Envoy

az network public-ip create --resource-group tap-run --location japaneast --name envoy-ip --sku Standard --allocation-method static
ENVOY_IP_RUN=$(az network public-ip show --resource-group tap-run --name envoy-ip --query ipAddress --output tsv)

Check the public IPs

$ az network public-ip list -o table
Name                                  ResourceGroup                     Location    Zones    Address        IdleTimeoutInMinutes    ProvisioningState
------------------------------------  --------------------------------  ----------  -------  -------------  ----------------------  -------------------
046fb436-2a94-4258-98c7-936672972a31  MC_tap-build_tap-build_japaneast  japaneast   123      4.241.144.189  30                      Succeeded
48543263-ed47-41cd-abc3-cb23671407d6  MC_tap-run_tap-run_japaneast      japaneast   123      20.27.128.119  30                      Succeeded
fd09c63c-f277-498e-8338-4cdfda21c7ba  MC_tap-view_tap-view_japaneast    japaneast   123      52.155.119.68  30                      Succeeded
envoy-ip                              tap-run                           japaneast            20.18.106.167  4                       Succeeded
envoy-ip                              tap-view                          japaneast            20.89.90.83    4                       Succeeded

Creating overlay files

Create overlay files to reduce post-installation work.

DOMAIN_NAME_RUN=run.$(echo ${ENVOY_IP_RUN} | sed 's/\./-/g').sslip.io

mkdir -p overlays/run


cat <<EOF > overlays/run/contour-default-tls.yaml                                                                                                                                                                                                                          
#@ load("@ytt:data", "data")
#@ load("@ytt:overlay", "overlay")
#@ namespace = data.values.namespace
---
apiVersion: v1
kind: Secret
metadata:
  name: default-ca
  namespace: #@ namespace
type: kubernetes.io/tls
stringData:
  tls.crt: |
$(cat certs/ca.crt | sed 's/^/    /g')
  tls.key: |
$(cat certs/ca.key | sed 's/^/    /g')
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: default-ca-issuer
  namespace: #@ namespace
spec:
  ca:
    secretName: default-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tap-default-tls
  namespace: #@ namespace
spec:
  dnsNames:
  - #@ "*.${DOMAIN_NAME_RUN}"
  issuerRef:
    kind: Issuer
    name: default-ca-issuer
  secretName: tap-default-tls
---
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: contour-delegation
  namespace: #@ namespace
spec:
  delegations:
  - secretName: tap-default-tls
    targetNamespaces:
    - "*"
EOF

cat <<EOF > overlays/run/cnrs-https.yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"metadata":{"name":"config-network"}, "kind": "ConfigMap"})
---
data:
  #@overlay/match missing_ok=True
  default-external-scheme: https
  #@overlay/match missing_ok=True
  http-protocol: redirected
EOF

Create secrets for each overlay file

kubectl -n tap-install create secret generic contour-default-tls \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/run/contour-default-tls.yaml \
  | kubectl apply -f-

kubectl -n tap-install create secret generic cnrs-https \
  -o yaml \
  --dry-run=client \
  --from-file=overlays/run/cnrs-https.yaml \
  | kubectl apply -f-

Install the Run Profile

Create tap-values-run.yaml.

cat <<EOF > tap-values-run.yaml
profile: run

ceip_policy_disclosed: true

shared:
  ingress_domain: ${DOMAIN_NAME_RUN}
  ca_cert_data: |
$(cat certs/ca.crt | sed 's/^/    /g')

contour:
  infrastructure_provider: azure
  contour:
    configFileContents:
      accesslog-format: json  
  envoy:
    service:
      type: LoadBalancer
      loadBalancerIP: ${ENVOY_IP_RUN}      
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-resource-group: tap-run

cnrs:
  domain_template: "{{.Name}}-{{.Namespace}}.{{.Domain}}"
  default_tls_secret: tanzu-system-ingress/tap-default-tls

appliveview_connector:
  backend:
    ingressEnabled: true
    host: appliveview.${DOMAIN_NAME_VIEW}

api_auto_registration:
  tap_gui_url: https://tap-gui.${DOMAIN_NAME_VIEW}
  cluster_name: run

accelerator:
  ingress:
    include: true    
    enable_tls: true  
  tls:
    secret_name: tap-default-tls
    namespace: tanzu-system-ingress

package_overlays:
- name: contour
  secrets:
  - name: contour-default-tls
- name: cnrs
  secrets:
  - name: cnrs-https

excluded_packages:
- image-policy-webhook.signing.apps.tanzu.vmware.com
- eventing.tanzu.vmware.com
EOF

Install TAP

tanzu package install tap \
  -p tap.tanzu.vmware.com \
  -v 1.3.3 \
  --values-file tap-values-run.yaml \
  -n tap-install \
  --wait=false

Wait until the installation succeeds. It takes about 5 minutes.

while [ "$(kubectl -n tap-install get app tap -o=jsonpath='{.status.friendlyDescription}')" != "Reconcile succeeded" ];do
  date
  kubectl get app -n tap-install
  echo "---------------------------------------------------------------------"
  sleep 30
done
echo "✅ Install succeeded"

Check the packageinstalls

$ tanzu package installed list -n tap-install  

  NAME                      PACKAGE-NAME                               PACKAGE-VERSION  STATUS               
  api-auto-registration     apis.apps.tanzu.vmware.com                 0.1.2            Reconcile succeeded  
  appliveview-connector     connector.appliveview.tanzu.vmware.com     1.3.1            Reconcile succeeded  
  appsso                    sso.apps.tanzu.vmware.com                  2.0.0            Reconcile succeeded  
  cartographer              cartographer.tanzu.vmware.com              0.5.4            Reconcile succeeded  
  cert-manager              cert-manager.tanzu.vmware.com              1.7.2+tap.1      Reconcile succeeded  
  cnrs                      cnrs.tanzu.vmware.com                      2.0.2            Reconcile succeeded  
  contour                   contour.tanzu.vmware.com                   1.22.0+tap.5     Reconcile succeeded  
  fluxcd-source-controller  fluxcd.source.controller.tanzu.vmware.com  0.27.0+tap.1     Reconcile succeeded  
  ootb-delivery-basic       ootb-delivery-basic.tanzu.vmware.com       0.10.5           Reconcile succeeded  
  ootb-templates            ootb-templates.tanzu.vmware.com            0.10.5           Reconcile succeeded  
  policy-controller         policy.apps.tanzu.vmware.com               1.1.3            Reconcile succeeded  
  service-bindings          service-bindings.labs.vmware.com           0.8.1            Reconcile succeeded  
  services-toolkit          services-toolkit.tanzu.vmware.com          0.8.1            Reconcile succeeded  
  source-controller         controller.source.apps.tanzu.vmware.com    0.5.1            Reconcile succeeded  
  tap                       tap.tanzu.vmware.com                       1.3.3            Reconcile succeeded  
  tap-auth                  tap-auth.tanzu.vmware.com                  1.1.0            Reconcile succeeded  
  tap-telemetry             tap-telemetry.tanzu.vmware.com             0.3.2            Reconcile succeeded 

Save the environment variables to env.sh.

cat <<EOF >> env.sh
export ENVOY_IP_RUN=${ENVOY_IP_RUN}
export DOMAIN_NAME_RUN=${DOMAIN_NAME_RUN}
EOF

Set up namespaces

https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-set-up-namespaces.html

Create rbac.yaml for a workload

cat <<EOF > rbac.yaml
apiVersion: v1
kind: Secret
metadata:
  name: tap-registry
  annotations:
    secretgen.carvel.dev/image-pull-secret: ""
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30K
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
secrets:
- name: registry-credentials
imagePullSecrets:
- name: registry-credentials
- name: tap-registry
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-permit-deliverable
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deliverable
subjects:
- kind: ServiceAccount
  name: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-permit-workload
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: workload
subjects:
- kind: ServiceAccount
  name: default
EOF

Set up in the Run Cluster

kubectl config use-context tap-run-admin
NAMESPACE=demo
kubectl create ns ${NAMESPACE}
kubectl apply -f rbac.yaml -n ${NAMESPACE}
tanzu secret registry add registry-credentials --server ${ACR_NAME}.azurecr.io --username ${SERVICE_PRINCIPAL_RO_USERNAME} --password ${SERVICE_PRINCIPAL_RO_PASSWORD} --namespace ${NAMESPACE}

Set up in the Build Cluster

kubectl config use-context tap-build-admin
NAMESPACE=demo
kubectl create ns ${NAMESPACE}
kubectl apply -f rbac.yaml -n ${NAMESPACE}
tanzu secret registry add registry-credentials --server ${ACR_NAME}.azurecr.io --username ${SERVICE_PRINCIPAL_RW_USERNAME} --password ${SERVICE_PRINCIPAL_RW_PASSWORD} --namespace ${NAMESPACE}

Create a Tekton pipeline (skip the tests here)

cat <<'EOF' > pipeline-maven.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: maven-test-pipeline
  labels:
    apps.tanzu.vmware.com/pipeline: test
    apps.tanzu.vmware.com/language: java
spec:
  params:
  - name: source-url
  - name: source-revision
  tasks:
  - name: test
    params:
    - name: source-url
      value: $(params.source-url)
    - name: source-revision
      value: $(params.source-revision)
    taskSpec:
      params:
      - name: source-url
      - name: source-revision
      steps:
      - name: test
        image: eclipse-temurin:17
        script: |-
          set -ex
          cd `mktemp -d`
          curl -s $(params.source-url) | tar -m -xzvf -
          # ./mvnw clean test -V --no-transfer-progress
          echo 'Skip'
EOF

kubectl apply -f pipeline-maven.yaml -n ${NAMESPACE}

Create a Scan Policy

cat <<EOF > scan-policy.yaml
apiVersion: scanning.apps.tanzu.vmware.com/v1beta1
kind: ScanPolicy
metadata:
  name: scan-policy
spec:
  regoFile: |
    package main
    # Accepted Values: "Critical", "High", "Medium", "Low", "Negligible", "UnknownSeverity"
    notAllowedSeverities := ["Critical", "High", "UnknownSeverity"]
    ignoreCves := ["CVE-2016-1000027", "GHSA-36p3-wjmg-h94x"]
    contains(array, elem) = true {
      array[_] = elem
    } else = false { true }
    isSafe(match) {
      severities := { e | e := match.ratings.rating.severity } | { e | e := match.ratings.rating[_].severity }
      some i
      fails := contains(notAllowedSeverities, severities[i])
      not fails
    }
    isSafe(match) {
      ignore := contains(ignoreCves, match.id)
      ignore
    }
    deny[msg] {
      comps := { e | e := input.bom.components.component } | { e | e := input.bom.components.component[_] }
      some i
      comp := comps[i]
      vulns := { e | e := comp.vulnerabilities.vulnerability } | { e | e := comp.vulnerabilities.vulnerability[_] }
      some j
      vuln := vulns[j]
      ratings := { e | e := vuln.ratings.rating.severity } | { e | e := vuln.ratings.rating[_].severity }
      not isSafe(vuln)
      msg = sprintf("CVE %s %s %s", [comp.name, vuln.id, ratings])
    }
EOF

kubectl apply -f scan-policy.yaml -n ${NAMESPACE}

Install Grype Scanner

NAMESPACE=demo
cat <<EOF > grype-${NAMESPACE}-values.yaml
namespace: ${NAMESPACE}
targetImagePullSecret: registry-credentials
metadataStore:
  url: https://metadata-store.${DOMAIN_NAME_VIEW}
  caSecret:
    name: store-ca-cert
    importFromNamespace: metadata-store-secrets
  authSecret:
    name: store-auth-token
    importFromNamespace: metadata-store-secrets
EOF
tanzu package install -n tap-install grype-${NAMESPACE} -p grype.scanning.apps.tanzu.vmware.com -v 1.3.1 -f grype-${NAMESPACE}-values.yaml
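
Wait until the Grype package reconciles, in the same way as the other package installs (optional check):

tanzu package installed get grype-${NAMESPACE} -n tap-install
# STATUS should become "Reconcile succeeded"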

Create a Workload

Switch the context to the Build Cluster.

kubectl config use-context tap-build-admin

Create a Workload

tanzu apps workload apply tanzu-java-web-app \
  --app tanzu-java-web-app \
  --git-repo https://github.com/making/tanzu-java-web-app \
  --git-branch main \
  --type web \
  --label apps.tanzu.vmware.com/has-tests=true \
  --annotation autoscaling.knative.dev/minScale=1 \
  --request-memory 768Mi \
  -n ${NAMESPACE} \
  -y

Wait until all resources in the supply chain are HEALTHY.

$ tanzu apps workload get -n ${NAMESPACE} tanzu-java-web-app 
📡 Overview
   name:   tanzu-java-web-app
   type:   web

💾 Source
   type:     git
   url:      https://github.com/sample-accelerators/tanzu-java-web-app
   branch:   main

📦 Supply Chain
   name:   source-test-scan-to-url

   RESOURCE           READY   HEALTHY   TIME    OUTPUT
   source-provider    True    True      7m14s   GitRepository/tanzu-java-web-app
   source-tester      True    True      6m42s   Runnable/tanzu-java-web-app
   source-scanner     True    True      5m56s   SourceScan/tanzu-java-web-app
   image-provider     True    True      3m38s   Image/tanzu-java-web-app
   image-scanner      True    True      2m58s   ImageScan/tanzu-java-web-app
   config-provider    True    True      2m57s   PodIntent/tanzu-java-web-app
   app-config         True    True      2m57s   ConfigMap/tanzu-java-web-app
   service-bindings   True    True      2m57s   ConfigMap/tanzu-java-web-app-with-claims
   api-descriptors    True    True      2m56s   ConfigMap/tanzu-java-web-app-with-api-descriptors
   config-writer      True    True      2m39s   Runnable/tanzu-java-web-app-config-writer
   deliverable        True    True      7m15s   ConfigMap/tanzu-java-web-app

🚚 Delivery

   Delivery resources not found.

💬 Messages
   No messages found.

🛶 Pods
   NAME                                         READY   STATUS      RESTARTS   AGE
   scan-tanzu-java-web-app-d2xll-bl7h6          0/1     Completed   0          6m43s
   scan-tanzu-java-web-app-v5m59-8trpp          0/1     Completed   0          3m39s
   tanzu-java-web-app-build-1-build-pod         0/1     Completed   0          5m56s
   tanzu-java-web-app-config-writer-q56zf-pod   0/1     Completed   0          2m56s
   tanzu-java-web-app-qgqlt-test-pod            0/1     Completed   0          7m12s

To see logs: "tanzu apps workload tail tanzu-java-web-app --namespace demo"

Get the generated Deliverable.

kubectl get cm -n ${NAMESPACE} tanzu-java-web-app -otemplate='{{.data.deliverable}}' > deliverable.yaml

Switch the context to the Run Cluster and apply this Deliverable.

kubectl config use-context tap-run-admin
kubectl apply -f deliverable.yaml -n ${NAMESPACE}

As of TAP 1.3.3, the labels that the TAP GUI's Supply Chain plugin uses to display Run Cluster information are missing from the generated Deliverable, so patch them in.

kubectl patch deliverable tanzu-java-web-app -n ${NAMESPACE} --type merge --patch "{\"metadata\":{\"labels\":{\"carto.run/workload-name\":\"tanzu-java-web-app\",\"carto.run/workload-namespace\":\"${NAMESPACE}\"}}}"
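
You can confirm that the labels are now present on the Deliverable (optional):

kubectl get deliverable -n ${NAMESPACE} tanzu-java-web-app --show-labels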

Check the workload in the TAP GUI.


Check the security analysis.


Get the KService information and check the app URL.

$ kubectl get ksvc -n demo tanzu-java-web-app 
NAME                 URL                                                          LATESTCREATED              LATESTREADY                READY   REASON
tanzu-java-web-app   https://tanzu-java-web-app-demo.run.20-18-106-167.sslip.io   tanzu-java-web-app-00001   tanzu-java-web-app-00001   True 

Access the app.

$ curl -k $(kubectl get ksvc -n demo tanzu-java-web-app -ojsonpath='{.status.url}')
Greetings from Spring Boot + Tanzu!
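
Instead of skipping TLS verification with -k, the same request can be made trusting the self-signed CA generated earlier (assuming certs/ca.crt is still in the working directory):

curl --cacert certs/ca.crt $(kubectl get ksvc -n demo tanzu-java-web-app -ojsonpath='{.status.url}')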

Register the catalog for tanzu-java-web-app.

Click "REGISTER ENTITY".


Put "https://github.com/making/tanzu-java-web-app/blob/main/catalog/catalog-info.yaml" to the "Repository URL" and click "ANALYZE" and "IMPORT" buttons.


Click "Runtime Resources" tab.


Click the "Running" Pod.


Scroll down to the "Live View" panel.


You'll see the live information for tanzu-java-web-app.


Select "Memory"

