IK.AM

@making's tech note


Notes on installing Tanzu Application Platform 1.1 (Iterate Profile) on AKS and integrating it with Azure AD

🏷 Kubernetes 🏷 Cartographer 🏷 AKS 🏷 Tanzu 🏷 TAP 🏷 Knative 🏷 Azure 
🗓 Updated at 2022-07-10T17:37:20Z  🗓 Created at 2022-07-09T14:21:54Z

⚠️ The content of this article is not supported by VMware. Resolve any problems caused by following it at your own risk, and do not contact VMware support about them.

This note installs Tanzu Application Platform 1.1 on AKS.

After installing TAP, we try the "Source to URL" feature, which deploys a "Hello World" application directly from source code.
ACR is used as the container registry, and RBAC is integrated with Azure AD.
HTTPS is also enabled.


Required CLIs

The following CLIs are assumed to be installed in advance.
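The commands in this note use at least az, kubectl, docker, pivnet, and the tanzu CLI (the tanzu CLI itself is installed in a later step). As a rough sketch, their presence on PATH can be checked with:

```shell
# Presence check for the CLIs used in this note (the exact required
# versions are documented on the Tanzu Network / TAP docs pages).
missing=0
for cmd in az kubectl docker pivnet tanzu; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "MISSING: $cmd"
    missing=$((missing + 1))
  fi
done
echo "checked 5 CLIs, ${missing} missing"
```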

Creating a resource group

az group create --name tap-rg --location japaneast

Creating an ACR instance

This time, the ACR admin account is used.

ACR_NAME=tap${RANDOM}
az acr create --resource-group tap-rg \
  --name ${ACR_NAME} --sku standard \
  --admin-enabled true
ACR_SERVER=${ACR_NAME}.azurecr.io
ACR_USERNAME=${ACR_NAME}
ACR_PASSWORD=$(az acr credential show --name ${ACR_NAME} --resource-group tap-rg --query 'passwords[0].value' --output tsv)

echo "${ACR_PASSWORD}" | docker login ${ACR_SERVER} -u ${ACR_USERNAME} --password-stdin

Creating the AKS cluster

The AKS cluster uses standard_d2_v5 (2 vCPU, 8 GB memory) worker nodes with the cluster-autoscaler enabled. Azure AD integration is also enabled.

az aks create \
  --resource-group tap-rg \
  --name tap-sandbox \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --node-vm-size standard_d2_v5 \
  --load-balancer-sku standard \
  --zones 1 2 3 \
  --generate-ssh-keys \
  --attach-acr ${ACR_NAME} \
  --enable-aad

TAP is installed using the admin account.

az aks get-credentials --resource-group tap-rg --name tap-sandbox --admin --overwrite-existing
$ kubectl cluster-info
Kubernetes control plane is running at https://tap-sandbo-tap-rg-85cd83-818ed213.hcp.japaneast.azmk8s.io:443
CoreDNS is running at https://tap-sandbo-tap-rg-85cd83-818ed213.hcp.japaneast.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://tap-sandbo-tap-rg-85cd83-818ed213.hcp.japaneast.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get node -owide
NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-42094893-vmss000001   Ready    agent   2m43s   v1.22.6   10.224.0.5    <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.5.11+azure-2

$ kubectl get pod -A  
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   azure-ip-masq-agent-djv6t             1/1     Running   0          2m55s
kube-system   cloud-node-manager-k8gqd              1/1     Running   0          2m55s
kube-system   coredns-autoscaler-7d56cd888-qp9c7    1/1     Running   0          4m21s
kube-system   coredns-dc97c5f55-8sqj8               1/1     Running   0          4m21s
kube-system   coredns-dc97c5f55-9h9n4               1/1     Running   0          115s
kube-system   csi-azuredisk-node-62lqz              3/3     Running   0          2m55s
kube-system   csi-azurefile-node-c2f9j              3/3     Running   0          2m55s
kube-system   konnectivity-agent-5697cf46cc-9vbmv   1/1     Running   0          84s
kube-system   konnectivity-agent-5697cf46cc-j4w8m   1/1     Running   0          81s
kube-system   kube-proxy-spf8p                      1/1     Running   0          2m55s
kube-system   metrics-server-64b66fbbc8-pdmz8       1/1     Running   0          4m21s

Installing the Tanzu CLI

# For Mac
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.1.2' --product-file-id=1228424
# For Linux
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.1.2' --product-file-id=1228427
# For Windows
pivnet download-product-files --product-slug='tanzu-application-platform' --release-version='1.1.2' --product-file-id=1228428
tar xvf tanzu-framework-*-amd64.tar
install cli/core/v0.11.6/tanzu-core-*_amd64 /usr/local/bin/tanzu
$ tanzu version
version: v0.11.6
buildDate: 2022-05-20
sha: 90440e2b

Installing plugins

export TANZU_CLI_NO_INIT=true
tanzu plugin install --local cli all

Accepting the EULAs

https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.1/tap/GUID-install-tanzu-cli.html#accept-the-end-user-license-agreements-0

Accept all of the following EULAs.


Installing Cluster Essentials for VMware Tanzu

Install Cluster Essentials for VMware Tanzu to deploy Kapp Controller and Secretgen Controller, which are required to install TAP.

pivnet login --api-token=<API TOKEN>

# Mac
pivnet download-product-files --product-slug='tanzu-cluster-essentials' --release-version='1.1.0' --product-file-id=1191985
# Linux
pivnet download-product-files --product-slug='tanzu-cluster-essentials' --release-version='1.1.0' --product-file-id=1191987
# Windows
pivnet download-product-files --product-slug='tanzu-cluster-essentials' --release-version='1.1.0' --product-file-id=1191983
TANZUNET_USERNAME=...
TANZUNET_PASSWORD=...

mkdir tanzu-cluster-essentials
tar xzvf tanzu-cluster-essentials-*-amd64-1.1.0.tgz -C tanzu-cluster-essentials

export INSTALL_BUNDLE=registry.tanzu.vmware.com/tanzu-cluster-essentials/cluster-essentials-bundle:1.1.0
export INSTALL_REGISTRY_HOSTNAME=registry.tanzu.vmware.com
export INSTALL_REGISTRY_USERNAME=${TANZUNET_USERNAME}
export INSTALL_REGISTRY_PASSWORD=${TANZUNET_PASSWORD}
cd tanzu-cluster-essentials
./install.sh --yes
cd ..

Installing Tanzu Application Platform

Registering the TAP package repository

TANZUNET_USERNAME=...
TANZUNET_PASSWORD=...

kubectl create ns tap-install

tanzu secret registry add tap-registry \
  --username "${TANZUNET_USERNAME}" \
  --password "${TANZUNET_PASSWORD}" \
  --server registry.tanzu.vmware.com \
  --export-to-all-namespaces \
  --yes \
  --namespace tap-install

tanzu package repository add tanzu-tap-repository \
  --url registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:1.1.2 \
  --namespace tap-install
$ tanzu package available list --namespace tap-install
- Retrieving available packages... 
  NAME                                                 DISPLAY-NAME                                                              SHORT-DESCRIPTION                                                                                                                                              LATEST-VERSION  
  accelerator.apps.tanzu.vmware.com                    Application Accelerator for VMware Tanzu                                  Used to create new projects and configurations.                                                                                                                1.1.2           
  api-portal.tanzu.vmware.com                          API portal                                                                A unified user interface to enable search, discovery and try-out of API endpoints at ease.                                                                     1.0.21          
  backend.appliveview.tanzu.vmware.com                 Application Live View for VMware Tanzu                                    App for monitoring and troubleshooting running apps                                                                                                            1.1.2           
  build.appliveview.tanzu.vmware.com                   Application Live View Conventions for VMware Tanzu                        Application Live View convention server                                                                                                                        1.0.2           
  buildservice.tanzu.vmware.com                        Tanzu Build Service                                                       Tanzu Build Service enables the building and automation of containerized software workflows securely and at scale.                                             1.5.2           
  cartographer.tanzu.vmware.com                        Cartographer                                                              Kubernetes native Supply Chain Choreographer.                                                                                                                  0.3.0           
  cnrs.tanzu.vmware.com                                Cloud Native Runtimes                                                     Cloud Native Runtimes is a serverless runtime based on Knative                                                                                                 1.2.1           
  connector.appliveview.tanzu.vmware.com               Application Live View Connector for VMware Tanzu                          App for discovering and registering running apps                                                                                                               1.1.2           
  controller.conventions.apps.tanzu.vmware.com         Convention Service for VMware Tanzu                                       Convention Service enables app operators to consistently apply desired runtime configurations to fleets of workloads.                                          0.6.3           
  controller.source.apps.tanzu.vmware.com              Tanzu Source Controller                                                   Tanzu Source Controller enables workload create/update from source code.                                                                                       0.3.3           
  conventions.appliveview.tanzu.vmware.com             Application Live View Conventions for VMware Tanzu                        Application Live View convention server                                                                                                                        1.1.2           
  developer-conventions.tanzu.vmware.com               Tanzu App Platform Developer Conventions                                  Developer Conventions                                                                                                                                          0.6.0           
  fluxcd.source.controller.tanzu.vmware.com            Flux Source Controller                                                    The source-controller is a Kubernetes operator, specialised in artifacts acquisition from external sources such as Git, Helm repositories and S3 buckets.      0.16.4          
  grype.scanning.apps.tanzu.vmware.com                 Grype for Supply Chain Security Tools - Scan                              Default scan templates using Anchore Grype                                                                                                                     1.1.2           
  image-policy-webhook.signing.apps.tanzu.vmware.com   Image Policy Webhook                                                      Image Policy Webhook enables defining of a policy to restrict unsigned container images.                                                                       1.1.2           
  learningcenter.tanzu.vmware.com                      Learning Center for Tanzu Application Platform                            Guided technical workshops                                                                                                                                     0.2.1           
  metadata-store.apps.tanzu.vmware.com                 Supply Chain Security Tools - Store                                       Post SBoMs and query for image, package, and vulnerability metadata.                                                                                           1.1.3           
  ootb-delivery-basic.tanzu.vmware.com                 Tanzu App Platform Out of The Box Delivery Basic                          Out of The Box Delivery Basic.                                                                                                                                 0.7.1           
  ootb-supply-chain-basic.tanzu.vmware.com             Tanzu App Platform Out of The Box Supply Chain Basic                      Out of The Box Supply Chain Basic.                                                                                                                             0.7.1           
  ootb-supply-chain-testing-scanning.tanzu.vmware.com  Tanzu App Platform Out of The Box Supply Chain with Testing and Scanning  Out of The Box Supply Chain with Testing and Scanning.                                                                                                         0.7.1           
  ootb-supply-chain-testing.tanzu.vmware.com           Tanzu App Platform Out of The Box Supply Chain with Testing               Out of The Box Supply Chain with Testing.                                                                                                                      0.7.1           
  ootb-templates.tanzu.vmware.com                      Tanzu App Platform Out of The Box Templates                               Out of The Box Templates.                                                                                                                                      0.7.1           
  run.appliveview.tanzu.vmware.com                     Application Live View for VMware Tanzu                                    App for monitoring and troubleshooting running apps                                                                                                            1.0.3           
  scanning.apps.tanzu.vmware.com                       Supply Chain Security Tools - Scan                                        Scan for vulnerabilities and enforce policies directly within Kubernetes native Supply Chains.                                                                 1.1.2           
  service-bindings.labs.vmware.com                     Service Bindings for Kubernetes                                           Service Bindings for Kubernetes implements the Service Binding Specification.                                                                                  0.7.1           
  services-toolkit.tanzu.vmware.com                    Services Toolkit                                                          The Services Toolkit enables the management, lifecycle, discoverability and connectivity of Service Resources (databases, message queues, DNS records, etc.).  0.6.0           
  spring-boot-conventions.tanzu.vmware.com             Tanzu Spring Boot Conventions Server                                      Default Spring Boot convention server.                                                                                                                         0.4.0           
  tap-auth.tanzu.vmware.com                            Default roles for Tanzu Application Platform                              Default roles for Tanzu Application Platform                                                                                                                   1.0.1           
  tap-gui.tanzu.vmware.com                             Tanzu Application Platform GUI                                            web app graphical user interface for Tanzu Application Platform                                                                                                1.1.3           
  tap-telemetry.tanzu.vmware.com                       Telemetry Collector for Tanzu Application Platform                        Tanzu Application Plaform Telemetry                                                                                                                            0.1.4           
  tap.tanzu.vmware.com                                 Tanzu Application Platform                                                Package to install a set of TAP components to get you started based on your use case.                                                                          1.1.2           
  tekton.tanzu.vmware.com                              Tekton Pipelines                                                          Tekton Pipelines is a framework for creating CI/CD systems.                                                                                                    0.33.5          
  workshops.learningcenter.tanzu.vmware.com            Workshop Building Tutorial                                                Workshop Building Tutorial                                                                                                                                     0.2.1  

Installing the Iterate Profile

Create the tap-values.yml used to install TAP. A placeholder domain is set for cnrs.domain_name; it will be changed later, once Envoy's External IP has been assigned.

To save resources, unused packages are excluded with excluded_packages. Also, since only Knative Serving is used from Cloud Native Runtimes, an overlay is configured to remove the other resources.

cat <<EOF > tap-values.yml
profile: iterate

ceip_policy_disclosed: true

cnrs:
  domain_name: tap.example.com
  domain_template: "{{.Name}}-{{.Namespace}}.{{.Domain}}"
  default_tls_secret: tanzu-system-ingress/cnrs-default-tls

buildservice:
  kp_default_repository: ${ACR_SERVER}/build-service
  kp_default_repository_username: ${ACR_USERNAME}
  kp_default_repository_password: ${ACR_PASSWORD}
  tanzunet_username: ${TANZUNET_USERNAME}
  tanzunet_password: ${TANZUNET_PASSWORD}
  descriptor_name: full

supply_chain: basic

ootb_supply_chain_basic:
  registry:
    server: ${ACR_SERVER}
    repository: supply-chain

contour:
  infrastructure_provider: azure
  envoy:
    service:
      type: LoadBalancer
      externalTrafficPolicy: Local
      annotations: {}

package_overlays:
- name: cnrs
  secrets:
  - name: cnrs-default-tls
  - name: cnrs-slim

excluded_packages:
- backend.appliveview.tanzu.vmware.com
- connector.appliveview.tanzu.vmware.com
- image-policy-webhook.signing.apps.tanzu.vmware.com
EOF

ℹ️ Annotations for the LoadBalancer can be specified via contour.envoy.service.annotations.
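As an aside, the domain_template above determines the hostname of each Knative Service. For a hypothetical Workload named hello in namespace demo, "{{.Name}}-{{.Namespace}}.{{.Domain}}" expands as follows:

```shell
# Hypothetical example of how cnrs.domain_template expands for a Workload.
NAME=hello             # Workload name (example)
NAMESPACE=demo         # Workload namespace (example)
DOMAIN=tap.example.com # cnrs.domain_name from tap-values.yml
echo "${NAME}-${NAMESPACE}.${DOMAIN}"
# -> hello-demo.tap.example.com
```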

Create the following definition as an overlay to provide the default TLS certificate used by Cloud Native Runtimes. I referred to the following documentation.

cat <<EOF > cnrs-default-tls.yml
#@ load("@ytt:data", "data")
#@ load("@ytt:overlay", "overlay")
#@ namespace = data.values.ingress.external.namespace
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cnrs-selfsigned-issuer
  namespace: #@ namespace
spec:
  selfSigned: { }
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cnrs-ca
  namespace: #@ namespace
spec:
  commonName: cnrs-ca
  isCA: true
  issuerRef:
    kind: Issuer
    name: cnrs-selfsigned-issuer
  secretName: cnrs-ca
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cnrs-ca-issuer
  namespace: #@ namespace
spec:
  ca:
    secretName: cnrs-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cnrs-default-tls
  namespace: #@ namespace
spec:
  dnsNames:
  - #@ "*.{}".format(data.values.domain_name)
  issuerRef:
    kind: Issuer
    name: cnrs-ca-issuer
  secretName: cnrs-default-tls
---
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: contour-delegation
  namespace: #@ namespace
spec:
  delegations:
  - secretName: cnrs-default-tls
    targetNamespaces:
    - "*"
#@overlay/match by=overlay.subset({"metadata":{"name":"config-network"}, "kind": "ConfigMap"})
---
data:
  #@overlay/match missing_ok=True
  default-external-scheme: https
EOF
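The manifests above build a chain: a self-signed Issuer signs the CA Certificate cnrs-ca, which backs a CA Issuer that in turn signs the wildcard certificate cnrs-default-tls. The same chain can be sketched locally with openssl (a rough illustration only, not part of the installation):

```shell
# Local sketch of the certificate chain cert-manager creates above:
# self-signed CA -> wildcard certificate signed by that CA.
DOMAIN=tap.example.com
# Self-signed CA (corresponds to the cnrs-ca Certificate)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=cnrs-ca" -days 1
# Wildcard leaf certificate signed by the CA (corresponds to cnrs-default-tls)
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
  -subj "/CN=*.${DOMAIN}"
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 1
# Verify the leaf chains back to the CA
openssl verify -CAfile ca.crt tls.crt
```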

Create an overlay that removes all resources other than Knative Serving from Cloud Native Runtimes.

cat <<EOF > cnrs-slim.yml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"metadata":{"namespace":"knative-eventing"}}), expects="1+"
#@overlay/remove
---
#@overlay/match by=overlay.subset({"metadata":{"namespace":"knative-sources"}}), expects="1+"
#@overlay/remove
---
#@overlay/match by=overlay.subset({"metadata":{"namespace":"triggermesh"}}), expects="1+"
#@overlay/remove
---
#@overlay/match by=overlay.subset({"metadata":{"namespace":"vmware-sources"}}), expects="1+"
#@overlay/remove
---
EOF

Create the overlay files as Secrets. Generating the manifests with --dry-run=client and piping them into kubectl apply keeps the commands re-runnable.

kubectl -n tap-install create secret generic cnrs-default-tls \
  -o yaml \
  --dry-run=client \
  --from-file=cnrs-default-tls.yml \
  | kubectl apply -f-

kubectl -n tap-install create secret generic cnrs-slim \
  -o yaml \
  --dry-run=client \
  --from-file=cnrs-slim.yml \
  | kubectl apply -f-

Install TAP.

tanzu package install tap -p tap.tanzu.vmware.com -v 1.1.2 --values-file tap-values.yml -n tap-install --poll-timeout 60m

Check the installation progress with the following command.

watch kubectl get node,app,pod -A -owide

Wait until every app reaches Reconcile succeeded.

$ kubectl get app -n tap-install 
NAME                       DESCRIPTION           SINCE-DEPLOY   AGE
appliveview-conventions    Reconcile succeeded   5m22s          89m
buildservice               Reconcile succeeded   3m27s          94m
cartographer               Reconcile succeeded   5m             90m
cert-manager               Reconcile succeeded   8m15s          94m
cnrs                       Reconcile succeeded   115s           88m
contour                    Reconcile succeeded   7m34s          90m
conventions-controller     Reconcile succeeded   8m23s          90m
developer-conventions      Reconcile succeeded   7m50s          89m
fluxcd-source-controller   Reconcile succeeded   106s           94m
ootb-delivery-basic        Reconcile succeeded   4m51s          86m
ootb-supply-chain-basic    Reconcile succeeded   4m52s          86m
ootb-templates             Reconcile succeeded   5m2s           86m
service-bindings           Reconcile succeeded   79s            94m
services-toolkit           Reconcile succeeded   91s            94m
source-controller          Reconcile succeeded   2m16s          94m
spring-boot-conventions    Reconcile succeeded   7m46s          89m
tap                        Reconcile succeeded   3m39s          94m
tap-auth                   Reconcile succeeded   2m51s          94m
tap-telemetry              Reconcile succeeded   2m22s          94m
tekton-pipelines           Reconcile succeeded   113s           94m
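Instead of watching manually, the wait can also be scripted as a polling loop. A minimal sketch of the pattern, with a dummy condition standing in for the real query (in practice, something like kubectl get app -n tap-install with a jsonpath filter over the DESCRIPTION column):

```shell
# Generic polling pattern; `reconciled` is a dummy stand-in for a kubectl
# query that would check whether all apps report "Reconcile succeeded".
attempts=0
reconciled() { [ "$attempts" -ge 3 ]; }  # dummy: succeeds on the 3rd check
until reconciled; do
  attempts=$((attempts + 1))
  echo "waiting for apps to reconcile... (attempt ${attempts})"
  # sleep 10  # in a real cluster, pause between polls
done
echo "all apps reconciled"
```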

The installed packages are as follows.

$ kubectl get packageinstall -n tap-install 
NAME                       PACKAGE NAME                                   PACKAGE VERSION   DESCRIPTION           AGE
appliveview-conventions    conventions.appliveview.tanzu.vmware.com       1.1.2             Reconcile succeeded   90m
buildservice               buildservice.tanzu.vmware.com                  1.5.2             Reconcile succeeded   94m
cartographer               cartographer.tanzu.vmware.com                  0.3.0             Reconcile succeeded   91m
cert-manager               cert-manager.tanzu.vmware.com                  1.5.3+tap.2       Reconcile succeeded   94m
cnrs                       cnrs.tanzu.vmware.com                          1.2.1             Reconcile succeeded   89m
contour                    contour.tanzu.vmware.com                       1.18.2+tap.2      Reconcile succeeded   91m
conventions-controller     controller.conventions.apps.tanzu.vmware.com   0.6.3             Reconcile succeeded   91m
developer-conventions      developer-conventions.tanzu.vmware.com         0.6.0             Reconcile succeeded   90m
fluxcd-source-controller   fluxcd.source.controller.tanzu.vmware.com      0.16.4            Reconcile succeeded   94m
ootb-delivery-basic        ootb-delivery-basic.tanzu.vmware.com           0.7.1             Reconcile succeeded   86m
ootb-supply-chain-basic    ootb-supply-chain-basic.tanzu.vmware.com       0.7.1             Reconcile succeeded   86m
ootb-templates             ootb-templates.tanzu.vmware.com                0.7.1             Reconcile succeeded   86m
service-bindings           service-bindings.labs.vmware.com               0.7.1             Reconcile succeeded   94m
services-toolkit           services-toolkit.tanzu.vmware.com              0.6.0             Reconcile succeeded   94m
source-controller          controller.source.apps.tanzu.vmware.com        0.3.3             Reconcile succeeded   94m
spring-boot-conventions    spring-boot-conventions.tanzu.vmware.com       0.4.0             Reconcile succeeded   90m
tap                        tap.tanzu.vmware.com                           1.1.2             Reconcile succeeded   95m
tap-auth                   tap-auth.tanzu.vmware.com                      1.0.1             Reconcile succeeded   94m
tap-telemetry              tap-telemetry.tanzu.vmware.com                 0.1.4             Reconcile succeeded   94m
tekton-pipelines           tekton.tanzu.vmware.com                        0.33.5            Reconcile succeeded   94m

The deployed Pods are as follows.

$ kubectl get pod -A
NAMESPACE                   NAME                                                   READY   STATUS    RESTARTS   AGE     IP            NODE                                NOMINATED NODE   READINESS GATES
app-live-view-conventions   appliveview-webhook-5b5b7b8f57-2tbx5                   1/1     Running   0          91m     10.244.1.5    aks-nodepool1-42094893-vmss000002   <none>           <none>
build-service               build-pod-image-fetcher-c79jd                          5/5     Running   0          95m     10.244.0.26   aks-nodepool1-42094893-vmss000001   <none>           <none>
build-service               build-pod-image-fetcher-x2fzg                          5/5     Running   0          89m     10.244.1.9    aks-nodepool1-42094893-vmss000002   <none>           <none>
build-service               build-pod-image-fetcher-zpqps                          5/5     Running   0          87m     10.244.2.10   aks-nodepool1-42094893-vmss000003   <none>           <none>
build-service               dependency-updater-controller-688bc785fb-9gn47         1/1     Running   0          95m     10.244.0.23   aks-nodepool1-42094893-vmss000001   <none>           <none>
build-service               secret-syncer-controller-86f7bdbf54-h647d              1/1     Running   0          95m     10.244.0.21   aks-nodepool1-42094893-vmss000001   <none>           <none>
build-service               smart-warmer-image-fetcher-hkb6v                       4/4     Running   0          89m     10.244.1.6    aks-nodepool1-42094893-vmss000002   <none>           <none>
build-service               smart-warmer-image-fetcher-kk2sf                       4/4     Running   0          91m     10.244.0.37   aks-nodepool1-42094893-vmss000001   <none>           <none>
build-service               smart-warmer-image-fetcher-n46zq                       4/4     Running   0          87m     10.244.2.9    aks-nodepool1-42094893-vmss000003   <none>           <none>
build-service               warmer-controller-67fccdd45c-74sgm                     1/1     Running   0          95m     10.244.0.19   aks-nodepool1-42094893-vmss000001   <none>           <none>
cartographer-system         cartographer-controller-7744b9b55f-wwvs5               1/1     Running   0          92m     10.244.1.8    aks-nodepool1-42094893-vmss000002   <none>           <none>
cert-injection-webhook      cert-injection-webhook-f9cd54fc-fdtx4                  1/1     Running   0          95m     10.244.0.22   aks-nodepool1-42094893-vmss000001   <none>           <none>
cert-manager                cert-manager-6766fd8484-8gkrm                          1/1     Running   0          95m     10.244.0.28   aks-nodepool1-42094893-vmss000001   <none>           <none>
cert-manager                cert-manager-cainjector-566cd4d8c6-25llg               1/1     Running   0          95m     10.244.0.27   aks-nodepool1-42094893-vmss000001   <none>           <none>
cert-manager                cert-manager-webhook-7c69cc6c79-bgtjz                  1/1     Running   0          95m     10.244.0.29   aks-nodepool1-42094893-vmss000001   <none>           <none>
conventions-system          conventions-controller-manager-77b6b5b4dd-d4kfl        1/1     Running   0          92m     10.244.0.30   aks-nodepool1-42094893-vmss000001   <none>           <none>
developer-conventions       webhook-6bd65c766d-ct7vz                               1/1     Running   0          91m     10.244.0.35   aks-nodepool1-42094893-vmss000001   <none>           <none>
flux-system                 source-controller-6b665b5559-6r79g                     1/1     Running   0          95m     10.244.0.16   aks-nodepool1-42094893-vmss000001   <none>           <none>
kapp-controller             kapp-controller-f578dd744-mklrl                        1/1     Running   0          98m     10.244.0.10   aks-nodepool1-42094893-vmss000001   <none>           <none>
knative-serving             activator-5bd7468ff9-j2md6                             1/1     Running   0          90m     10.244.1.7    aks-nodepool1-42094893-vmss000002   <none>           <none>
knative-serving             activator-5bd7468ff9-wjqhn                             1/1     Running   0          90m     10.244.1.3    aks-nodepool1-42094893-vmss000002   <none>           <none>
knative-serving             activator-5bd7468ff9-xxqsf                             1/1     Running   0          90m     10.244.2.4    aks-nodepool1-42094893-vmss000003   <none>           <none>
knative-serving             autoscaler-66467f49fd-sjdr9                            1/1     Running   0          90m     10.244.0.39   aks-nodepool1-42094893-vmss000001   <none>           <none>
knative-serving             autoscaler-hpa-5b7f9dc5bd-c7v98                        1/1     Running   0          90m     10.244.1.11   aks-nodepool1-42094893-vmss000002   <none>           <none>
knative-serving             controller-69586dc464-zvkt5                            1/1     Running   0          90m     10.244.1.10   aks-nodepool1-42094893-vmss000002   <none>           <none>
knative-serving             domain-mapping-867547646b-79787                        1/1     Running   0          90m     10.244.0.38   aks-nodepool1-42094893-vmss000001   <none>           <none>
knative-serving             domainmapping-webhook-8cf7f7866-w6276                  1/1     Running   0          90m     10.244.2.3    aks-nodepool1-42094893-vmss000003   <none>           <none>
knative-serving             net-certmanager-controller-57cb45d68d-8pq6m            1/1     Running   0          90m     10.244.2.5    aks-nodepool1-42094893-vmss000003   <none>           <none>
knative-serving             net-certmanager-webhook-78f7f9d6f9-v8h8l               1/1     Running   0          90m     10.244.2.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
knative-serving             net-contour-controller-5bbb6b6b5c-zjh7x                1/1     Running   0          90m     10.244.2.7    aks-nodepool1-42094893-vmss000003   <none>           <none>
knative-serving             webhook-c5f4ff649-pcw5s                                1/1     Running   0          90m     10.244.1.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
knative-serving             webhook-c5f4ff649-z975p                                1/1     Running   0          90m     10.244.2.8    aks-nodepool1-42094893-vmss000003   <none>           <none>
kpack                       kpack-controller-9ddb88699-7qjd5                       1/1     Running   0          95m     10.244.0.25   aks-nodepool1-42094893-vmss000001   <none>           <none>
kpack                       kpack-webhook-cb68dcd48-ch6sq                          1/1     Running   0          95m     10.244.0.20   aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 azure-ip-masq-agent-9pg76                              1/1     Running   0          90m     10.224.0.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
kube-system                 azure-ip-masq-agent-dmc28                              1/1     Running   0          88m     10.224.0.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
kube-system                 azure-ip-masq-agent-wdm4c                              1/1     Running   0          3h18m   10.224.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 cloud-node-manager-4bjmj                               1/1     Running   0          88m     10.224.0.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
kube-system                 cloud-node-manager-52t6f                               1/1     Running   0          3h18m   10.224.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 cloud-node-manager-w9tpr                               1/1     Running   0          90m     10.224.0.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
kube-system                 coredns-autoscaler-7d56cd888-8gn52                     1/1     Running   0          3h19m   10.244.0.4    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 coredns-dc97c5f55-g95px                                1/1     Running   0          3h17m   10.244.0.7    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 coredns-dc97c5f55-vmssx                                1/1     Running   0          3h19m   10.244.0.6    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 csi-azuredisk-node-pfwth                               3/3     Running   0          3h18m   10.224.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 csi-azuredisk-node-sfxkc                               3/3     Running   0          88m     10.224.0.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
kube-system                 csi-azuredisk-node-vqx5p                               3/3     Running   0          90m     10.224.0.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
kube-system                 csi-azurefile-node-brgdb                               3/3     Running   0          3h18m   10.224.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 csi-azurefile-node-d8jdl                               3/3     Running   0          88m     10.224.0.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
kube-system                 csi-azurefile-node-r8k6x                               3/3     Running   0          90m     10.224.0.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
kube-system                 konnectivity-agent-7998654bf-5kmmg                     1/1     Running   0          3h8m    10.244.0.9    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 konnectivity-agent-7998654bf-mw2vj                     1/1     Running   0          3h8m    10.244.0.8    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 kube-proxy-k5nxr                                       1/1     Running   0          90m     10.224.0.4    aks-nodepool1-42094893-vmss000002   <none>           <none>
kube-system                 kube-proxy-r2jvx                                       1/1     Running   0          88m     10.224.0.6    aks-nodepool1-42094893-vmss000003   <none>           <none>
kube-system                 kube-proxy-sxxgl                                       1/1     Running   0          3h18m   10.224.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
kube-system                 metrics-server-64b66fbbc8-2rdv9                        1/1     Running   0          3h19m   10.244.0.5    aks-nodepool1-42094893-vmss000001   <none>           <none>
secretgen-controller        secretgen-controller-76d5b44894-f8vrl                  1/1     Running   0          98m     10.244.0.11   aks-nodepool1-42094893-vmss000001   <none>           <none>
service-bindings            manager-76c748b987-9h5fp                               1/1     Running   0          95m     10.244.0.18   aks-nodepool1-42094893-vmss000001   <none>           <none>
services-toolkit            services-toolkit-controller-manager-668cd9c746-8h8lh   1/1     Running   0          95m     10.244.0.17   aks-nodepool1-42094893-vmss000001   <none>           <none>
source-system               source-controller-manager-747bcdd6d8-zzv96             1/1     Running   0          96m     10.244.0.12   aks-nodepool1-42094893-vmss000001   <none>           <none>
spring-boot-convention      spring-boot-webhook-d668f8474-plskg                    1/1     Running   0          91m     10.244.0.36   aks-nodepool1-42094893-vmss000001   <none>           <none>
stacks-operator-system      controller-manager-6cb4d87ffb-hx2wc                    1/1     Running   0          95m     10.244.0.24   aks-nodepool1-42094893-vmss000001   <none>           <none>
tanzu-system-ingress        contour-85dd7d99ff-d447g                               1/1     Running   0          92m     10.244.0.33   aks-nodepool1-42094893-vmss000001   <none>           <none>
tanzu-system-ingress        contour-85dd7d99ff-drq6k                               1/1     Running   0          92m     10.244.0.32   aks-nodepool1-42094893-vmss000001   <none>           <none>
tanzu-system-ingress        envoy-2rs47                                            2/2     Running   0          87m     10.244.2.2    aks-nodepool1-42094893-vmss000003   <none>           <none>
tanzu-system-ingress        envoy-96cvt                                            2/2     Running   0          89m     10.244.1.2    aks-nodepool1-42094893-vmss000002   <none>           <none>
tanzu-system-ingress        envoy-tdnmg                                            2/2     Running   0          92m     10.244.0.31   aks-nodepool1-42094893-vmss000001   <none>           <none>
tap-telemetry               tap-telemetry-controller-f748d7c5f-jmrj6               1/1     Running   0          96m     10.244.0.13   aks-nodepool1-42094893-vmss000001   <none>           <none>
tekton-pipelines            tekton-pipelines-controller-576bbc7dfb-hzhfz           1/1     Running   0          96m     10.244.0.14   aks-nodepool1-42094893-vmss000001   <none>           <none>
tekton-pipelines            tekton-pipelines-webhook-854c75d947-bt2tq              1/1     Running   0          96m     10.244.0.15   aks-nodepool1-42094893-vmss000001   <none>           <none>

The cluster has autoscaled to three nodes (with standard_d2_v5).

$ kubectl get node -owide
NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-42094893-vmss000001   Ready    agent   3h18m   v1.22.6   10.224.0.5    <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.5.11+azure-2
aks-nodepool1-42094893-vmss000002   Ready    agent   90m     v1.22.6   10.224.0.4    <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.5.11+azure-2
aks-nodepool1-42094893-vmss000003   Ready    agent   88m     v1.22.6   10.224.0.6    <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.5.11+azure-2

Using the External IP assigned to Envoy, change cnrs.domain_name. sslip.io is used for the domain name. For example, if the External IP is 10.99.0.147, set cnrs.domain_name to *.10-99-0-147.sslip.io.
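The dash-separated sslip.io host can be derived from the External IP with a small shell pipeline. A sketch; 10.99.0.147 stands in for your actual Envoy IP:

```shell
# Convert the Envoy External IP into an sslip.io wildcard domain
ENVOY_IP=10.99.0.147  # placeholder; normally read from the envoy Service
DOMAIN="$(echo "${ENVOY_IP}" | sed 's/\./-/g').sslip.io"
echo "*.${DOMAIN}"  # prints *.10-99-0-147.sslip.io
```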

Update tap-values.yml with the following command.

sed -i.bak "s|tap.example.com|$(kubectl get -n tanzu-system-ingress svc envoy -ojsonpath='{.status.loadBalancer.ingress[0].ip}' | sed 's/\./-/g').sslip.io|g" tap-values.yml

Update TAP.

tanzu package installed update tap -n tap-install -v 1.1.2 -f tap-values.yml 

Confirm that the DNS name of the default TLS Certificate has been updated. This may take a little while.

$ kubectl get certificate -n tanzu-system-ingress cnrs-default-tls -ojsonpath='{.spec.dnsNames[0]}'
*.20-27-16-181.sslip.io

Creating an AD Group and Adding Members

Create an AD group for TAP Developers.

GROUP_ID=$(az ad group create --display-name tap-demo-developer --mail-nickname tap-demo-developer --query id -o tsv)

Add your own account (here, tmaki@pivotalazure.vmware.com) to this group.

az ad group member add --group tap-demo-developer --member-id $(az ad user show --id tmaki@pivotalazure.vmware.com --query id -o tsv)

Confirm that the account has been added to the group.

$ az ad group member list --group tap-demo-developer 
[
  {
    "@odata.type": "#microsoft.graph.user",
    "businessPhones": [],
    "displayName": "Toshiaki Maki",
    "givenName": "Toshiaki",
    "id": "********",
    "jobTitle": null,
    "mail": null,
    "mobilePhone": null,
    "officeLocation": null,
    "preferredLanguage": null,
    "surname": "Maki",
    "userPrincipalName": "tmaki@pivotalazure.vmware.com"
  }
]

GROUP_ID can also be retrieved with the following command.

GROUP_ID=$(az ad group list --filter "displayname eq 'tap-demo-developer'" --query '[0].id' -o tsv )

Deploying Workloads

Preparation for Creating a Workload

Continue working as admin here.

az aks get-credentials --resource-group tap-rg --name tap-sandbox --admin --overwrite-existing

https://docs.vmware.com/en/Tanzu-Application-Platform/1.1/tap/GUID-install-components.html#setup (with some modifications)

Create the demo namespace.

kubectl create ns demo

In the demo namespace, bind the app-editor ClusterRole to the AD group created earlier.

kubectl create rolebinding app-editor -n demo --clusterrole app-editor --group ${GROUP_ID}

Create a Secret for accessing ACR. The admin account is used here, but a Service Principal might be preferable.

tanzu secret registry add registry-credentials --server ${ACR_SERVER} --username ${ACR_USERNAME} --password ${ACR_PASSWORD} --namespace demo
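As an alternative to the admin account, a Service Principal scoped to the ACR instance could be used. A sketch under assumptions: the SP name tap-acr-sp is arbitrary, and the built-in AcrPush role grants push and pull:

```shell
# Create a Service Principal with push/pull rights on this ACR only
ACR_ID=$(az acr show --name ${ACR_NAME} --resource-group tap-rg --query id --output tsv)
SP_PASSWORD=$(az ad sp create-for-rbac --name tap-acr-sp \
  --scopes ${ACR_ID} --role acrpush --query password --output tsv)
SP_APP_ID=$(az ad sp list --display-name tap-acr-sp --query '[0].appId' --output tsv)

# Register the SP credentials instead of the admin account
tanzu secret registry add registry-credentials \
  --server ${ACR_SERVER} \
  --username ${SP_APP_ID} \
  --password ${SP_PASSWORD} \
  --namespace demo
```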

Configure the Service Account.

cat <<EOF | kubectl -n demo apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: tap-registry
  annotations:
    secretgen.carvel.dev/image-pull-secret: ""
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30K
---
apiVersion: v1
kind: Secret
metadata:
  name: git-ssh
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
secrets:
- name: registry-credentials
- name: git-ssh
imagePullSecrets:
- name: registry-credentials
- name: tap-registry
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-permit-deliverable
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deliverable
subjects:
- kind: ServiceAccount
  name: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-permit-workload
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: workload
subjects:
- kind: ServiceAccount
  name: default
EOF

Logging In as a Developer

Now log in to the Kubernetes cluster as a Developer.

az aks get-credentials --resource-group tap-rg --name tap-sandbox --overwrite-existing

On first access to the Kubernetes cluster you will be prompted to log in to Azure. Since this user has no read permission for Pods in the default namespace, kubectl get pod returns Forbidden.

$ kubectl get pod
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AJ9MVQ438 to authenticate.
Error from server (Forbidden): pods is forbidden: User "tmaki@pivotalazure.vmware.com" cannot list resource "pods" in API group "" in the namespace "default"

In the demo namespace, you can list Pods.

$ kubectl get pod -n demo
No resources found in demo namespace.
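Permissions can also be checked without triggering a Forbidden error, using kubectl auth can-i (run as the developer user):

```shell
# Should print "yes" in demo (where app-editor is bound) and "no" in default
kubectl auth can-i list pods -n demo
kubectl auth can-i list pods -n default
```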

Deploying a Node.js App

tanzu apps workload apply hello \
  --app hello \
  --git-repo https://github.com/making/hello-nodejs \
  --git-branch master \
  --type web \
  -n demo \
  -y
tanzu apps workload tail hello -n demo   

If you want to see the resources being created, watch the following command.

watch kubectl get workload,pod,gitrepo,imgs,build,podintent,taskrun,imagerepository,app,ksvc -n demo -owide
$ kubectl get workload,pod,gitrepo,imgs,build,podintent,taskrun,imagerepository,app,ksvc -n demo       
NAME                       SOURCE                                   SUPPLYCHAIN     READY   REASON   AGE
workload.carto.run/hello   https://github.com/making/hello-nodejs   source-to-url   True    Ready    2m51s

NAME                                          READY   STATUS      RESTARTS   AGE
pod/hello-00001-deployment-6746767684-wg28n   2/2     Running     0          41s
pod/hello-build-1-build-pod                   0/1     Completed   0          2m45s
pod/hello-config-writer-2h56c-pod             0/1     Completed   0          115s

NAME                                           URL                                      READY   STATUS                                                              AGE
gitrepository.source.toolkit.fluxcd.io/hello   https://github.com/making/hello-nodejs   True    Fetched revision: master/19610d1789fb30d571e0b27a65ed03a7bdec2922   2m49s

NAME                   LATESTIMAGE                                                                                                           READY
image.kpack.io/hello   tap23774.azurecr.io/supply-chain/hello-demo@sha256:970139e95f01aea88e22ed329d68c9ef57859259f327baa2e47452194c0b07a5   True

NAME                           IMAGE                                                                                                                 SUCCEEDED
build.kpack.io/hello-build-1   tap23774.azurecr.io/supply-chain/hello-demo@sha256:970139e95f01aea88e22ed329d68c9ef57859259f327baa2e47452194c0b07a5   True

NAME                                                READY   REASON               AGE
podintent.conventions.apps.tanzu.vmware.com/hello   True    ConventionsApplied   2m1s

NAME                                           SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
taskrun.tekton.dev/hello-config-writer-2h56c   True        Succeeded   115s        67s

NAME                                                          IMAGE                                                                                     URL                                                                                                                                                                                              READY   REASON   AGE
imagerepository.source.apps.tanzu.vmware.com/hello-delivery   tap23774.azurecr.io/supply-chain/hello-demo-bundle:52124fcf-3874-476c-91d9-b9fb076a3fa2   http://source-controller-manager-artifact-service.source-system.svc.cluster.local./imagerepository/demo/hello-delivery/c3d31cfe72575bea1caef561336dafcf95e46940d59519c9dabea92e510cbbab.tar.gz   True    Ready    2m46s

NAME                         DESCRIPTION           SINCE-DEPLOY   AGE
app.kappctrl.k14s.io/hello   Reconcile succeeded   43s            2m46s

NAME                                URL                                        LATESTCREATED   LATESTREADY   READY   REASON
service.serving.knative.dev/hello   https://hello-demo.20-27-16-181.sslip.io   hello-00001     hello-00001   True  
$ tanzu apps workload get -n demo hello
# hello: Ready
---
lastTransitionTime: "2022-07-09T10:40:24Z"
message: ""
reason: Ready
status: "True"
type: Ready

Pods
NAME                                      STATUS      RESTARTS   AGE
hello-00001-deployment-6746767684-wg28n   Running     0          60s
hello-build-1-build-pod                   Succeeded   0          3m4s
hello-config-writer-2h56c-pod             Succeeded   0          2m14s

Knative Services
NAME    READY   URL
hello   Ready   https://hello-demo.20-27-16-181.sslip.io
$ curl -k https://hello-demo.20-27-16-181.sslip.io
Hello Tanzu!!

Once you have finished checking, delete the Workload.

tanzu apps workload delete -n demo hello -y

Deploying a Java App

tanzu apps workload apply spring-music \
  --app spring-music \
  --git-repo https://github.com/scottfrederick/spring-music \
  --git-branch tanzu \
  --type web \
  --annotation autoscaling.knative.dev/minScale=1 \
  -n demo \
  -y
tanzu apps workload tail spring-music -n demo   

If you want to see the resources being created, watch the following command.

watch kubectl get workload,pod,gitrepo,imgs,build,podintent,taskrun,imagerepository,app,ksvc -n demo -owide
$ kubectl get workload,pod,gitrepo,imgs,build,podintent,taskrun,imagerepository,app,ksvc -n demo -owide 

NAME                              SOURCE                                           SUPPLYCHAIN     READY   REASON   AGE
workload.carto.run/spring-music   https://github.com/scottfrederick/spring-music   source-to-url   True    Ready    4m36s

NAME                                                READY   STATUS      RESTARTS   AGE     IP            NODE                                NOMINATED NODE   READINESS GATES
pod/spring-music-00001-deployment-f546f5498-hcswz   2/2     Running     0          25s     10.244.2.16   aks-nodepool1-42094893-vmss000003   <none>           <none>
pod/spring-music-build-1-build-pod                  0/1     Completed   0          4m30s   10.244.2.14   aks-nodepool1-42094893-vmss000003   <none>           <none>
pod/spring-music-config-writer-m7gjm-pod            0/1     Completed   0          90s     10.244.2.15   aks-nodepool1-42094893-vmss000003   <none>           <none>

NAME                                                  URL                                              READY   STATUS                                                             AGE
gitrepository.source.toolkit.fluxcd.io/spring-music   https://github.com/scottfrederick/spring-music   True    Fetched revision: tanzu/922a509361d1345984899cafeb34622ef7dd2086   4m33s

NAME                          LATESTIMAGE                                                                                                                  READY
image.kpack.io/spring-music   tap23774.azurecr.io/supply-chain/spring-music-demo@sha256:a714b1409ec79b1b13bc46451e120b61372f2ee536e956907d7b01cd1f7cd76e   True

NAME                                  IMAGE                                                                                                                        SUCCEEDED
build.kpack.io/spring-music-build-1   tap23774.azurecr.io/supply-chain/spring-music-demo@sha256:a714b1409ec79b1b13bc46451e120b61372f2ee536e956907d7b01cd1f7cd76e   True

NAME                                                       READY   REASON               AGE
podintent.conventions.apps.tanzu.vmware.com/spring-music   True    ConventionsApplied   95s

NAME                                                  SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
taskrun.tekton.dev/spring-music-config-writer-m7gjm   True        Succeeded   90s         80s

NAME                                                                 IMAGE                                                                                            URL                                                                                                                                                                                                     READY   REASON   AGE
imagerepository.source.apps.tanzu.vmware.com/spring-music-delivery   tap23774.azurecr.io/supply-chain/spring-music-demo-bundle:d1d29896-3ae9-466a-95e2-dc247ca2f48b   http://source-controller-manager-artifact-service.source-system.svc.cluster.local./imagerepository/demo/spring-music-delivery/920d3945f55cbeb15ec2b36fc9df18c1515e5ebf46cfeff91bef41d2ea737bd9.tar.gz   True    Ready    4m31s

NAME                                DESCRIPTION           SINCE-DEPLOY   AGE
app.kappctrl.k14s.io/spring-music   Reconcile succeeded   27s            4m31s

NAME                                       URL                                               LATESTCREATED        LATESTREADY          READY   REASON
service.serving.knative.dev/spring-music   https://spring-music-demo.20-27-16-181.sslip.io   spring-music-00001   spring-music-00001   True  
$ tanzu apps workload get -n demo spring-music
# spring-music: Ready
---
lastTransitionTime: "2022-07-09T10:58:04Z"
message: ""
reason: Ready
status: "True"
type: Ready

Pods
NAME                                            STATUS      RESTARTS   AGE
spring-music-00001-deployment-f546f5498-hcswz   Running     0          54s
spring-music-build-1-build-pod                  Succeeded   0          4m59s
spring-music-config-writer-m7gjm-pod            Succeeded   0          119s

Knative Services
NAME           READY   URL
spring-music   Ready   https://spring-music-demo.20-27-16-181.sslip.io
(Screenshot: the browser shows a certificate warning, since the default TLS certificate is not trusted.)

Type "THIS IS UNSAFE" to bypass the warning.

(Screenshot: the Spring Music app is displayed.)

Once you have finished checking, delete the Workload.

tanzu apps workload delete -n demo spring-music -y

[Optional] Creating a Certificate with Let's Encrypt

If the AKS cluster is publicly accessible, you can easily generate a Let's Encrypt certificate using the HTTP-01 challenge.

Perform the following steps as admin.

az aks get-credentials --resource-group tap-rg --name tap-sandbox --admin --overwrite-existing

Modify cnrs-default-tls.yml as follows.

cat <<EOF > cnrs-default-tls.yml
#@ load("@ytt:data", "data")
#@ load("@ytt:overlay", "overlay")
#@ namespace = data.values.ingress.external.namespace
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: contour
#@overlay/match by=overlay.subset({"metadata":{"name":"config-certmanager"}, "kind": "ConfigMap"})
---
data:
  #@overlay/match missing_ok=True
  issuerRef: |
    kind: ClusterIssuer
    name: letsencrypt
#@overlay/match by=overlay.subset({"metadata":{"name":"config-network"}, "kind": "ConfigMap"})
---
data:
  #@overlay/match missing_ok=True
  default-external-scheme: https
  #@overlay/match missing_ok=True
  auto-tls: Enabled
EOF

Delete the following line from tap-values.yml.

  default_tls_secret: tanzu-system-ingress/cnrs-default-tls

Update the overlay and TAP. Also, because the ConfigMaps in the knative-serving namespace are set up not to reflect configuration changes after the initial setup, delete them so they are recreated.

kubectl -n tap-install create secret generic cnrs-default-tls \
  -o yaml \
  --dry-run=client \
  --from-file=cnrs-default-tls.yml \
  | kubectl apply -f-
kubectl delete cm -n knative-serving config-certmanager
kubectl delete cm -n knative-serving config-network 
tanzu package installed update tap -n tap-install -v 1.1.2 -f tap-values.yml 

After a while, if the ClusterIssuer has been created as shown below, everything is fine.

$ kubectl get clusterissuer
NAME          READY   AGE
letsencrypt   True    18s

If you deploy Spring Music as before, this time you can access it with a trusted certificate.

(Screenshot: Spring Music served over HTTPS with a trusted certificate.)
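This can also be verified from the command line: unlike the earlier self-signed setup, curl now succeeds without the -k flag (the hostname follows the sslip.io domain used above):

```shell
# No -k / --insecure needed once the Let's Encrypt certificate is in place
curl https://spring-music-demo.20-27-16-181.sslip.io
```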

[Optional] Adding an AD User

The user used earlier (tmaki@pivotalazure.vmware.com) held the AKS Admin Role. In practice, a TAP Developer should not hold the AKS Admin Role, so let's try the pattern using the User Role instead.

First, create a user. Here, an external user is invited.

(Screenshots: inviting the user in the Azure portal.)

Accept the invitation.

(Screenshot: accepting the invitation.)

Assign the AKS User Role to the added user.

ACCOUNT_ID=$(az ad user show --id tmaki_vmware.com#EXT#@pivotalio.onmicrosoft.com --query id -o tsv)
AKS_ID=$(az aks show -g tap-rg -n tap-sandbox --query id -o tsv)

az role assignment create --assignee ${ACCOUNT_ID} --scope ${AKS_ID} --role "Azure Kubernetes Service Cluster User Role"

Also add the user to the tap-demo-developer group.

az ad group member add --group tap-demo-developer --member-id ${ACCOUNT_ID}

Confirm that the user has been added to the group.

$ az ad group member list --group tap-demo-developer 
[
  {
    "@odata.type": "#microsoft.graph.user",
    "businessPhones": [],
    "displayName": "Toshiaki Maki",
    "givenName": "Toshiaki",
    "id": "********",
    "jobTitle": null,
    "mail": null,
    "mobilePhone": null,
    "officeLocation": null,
    "preferredLanguage": null,
    "surname": "Maki",
    "userPrincipalName": "tmaki@pivotalazure.vmware.com"
  },
  {
    "@odata.type": "#microsoft.graph.user",
    "businessPhones": [],
    "displayName": "Toshiaki Maki",
    "givenName": null,
    "id": "********",
    "jobTitle": null,
    "mail": "tmaki@vmware.com",
    "mobilePhone": null,
    "officeLocation": null,
    "preferredLanguage": null,
    "surname": null,
    "userPrincipalName": "tmaki_vmware.com#EXT#@pivotalio.onmicrosoft.com"
  }
]

In a separate environment, log in as the added user.

az login

Retrieve the kubeconfig with the following command. Adding --admin will result in an error.

az aks get-credentials --resource-group tap-rg --name tap-sandbox --overwrite-existing 

Creating Workloads works the same as before.

Uninstalling TAP

kubectl delete workload -A --all
tanzu package installed delete tap -n tap-install -y

Deleting the AKS Cluster

az aks delete \
  --resource-group tap-rg \
  --name tap-sandbox

Deleting the ACR Instance

az acr delete --resource-group tap-rg --name ${ACR_NAME}

Deleting the Resource Group

az group delete --name tap-rg
