On-prem requirements and deployment

Requirements

  • Managed Kubernetes v1.22 or above
    • 4 nodes
    • 8 CPU, 32Gi RAM (for each node)
  • Storage bucket (S3 / GCS / Azure Blob storage) with read/write credentials
  • Network access
    • Superwise container registry
    • Public container registry
    • Authentication (443)
    • SMTP (25, 587, 465) / Slack (443) - Optional
    • Managed monitoring (443, 10516) - Optional
  • SSL Certificate that matches desired hostname
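
With cluster access configured, the sizing requirements above can be spot-checked with kubectl; a minimal sketch (requires a working kubeconfig, and is not a substitute for verifying storage and network access):

```
# Server version should report v1.22 or above.
kubectl version

# Expect at least 4 nodes, each with 8 CPU and 32Gi of memory.
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'
```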

Deployment

Prerequisites

  • Kubernetes cluster with admin permissions (you must be able to run kubectl against it)
  • Kubectl installed and configured
  • Helm ( >= v3.8.0)
  • Helmfile
  • Superwise deployment kit (will be provided by Superwise)
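
The CLI prerequisites above can be verified with a short shell check. This is a minimal sketch: it only confirms that each tool is on the PATH, not that it meets the minimum version listed above (e.g. Helm >= v3.8.0).

```shell
#!/bin/sh
# Check that the CLI tools required for the deployment are on PATH.
# Presence only; minimum versions are not enforced here.
for tool in kubectl helm helmfile; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```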

Setting parameters

Superwise deployment parameters are defined in the values.yaml file.
The following parameters must be changed to match your desired configuration:

  • superwise.hostName: The hostname that is used to access Superwise (UI/API).
  • gateway.tls.secretName: The name of a secret of type tls that contains the certificate and private key. This certificate will be used when accessing Superwise and should match the hostname defined in superwise.hostName.
  • storage.azure.connectionString: The connection string for the Azure Storage Account.
    Alternatively, storage.azure.connectionStringSecret.name and storage.azure.connectionStringSecret.key can be used to provide the value using a preexisting secret.
  • storage.azure.containerName: The name of a preexisting container in the Azure Storage Account.
  • nodeSelector: Node selector values that will be used for all workloads.
  • tolerations: Tolerations that will be used for all workloads.
  • affinity: Affinity rules that will be used for all workloads.
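
Taken together, a values.yaml override might look like the following sketch. The hostname, secret name, and container name are illustrative placeholders inferred from the parameter list above, not defaults:

```yaml
superwise:
  hostName: superwise.example.com

gateway:
  tls:
    # Must exist as a secret of type tls in the istio-ingress
    # namespace (see "Post installation").
    secretName: superwise-tls

storage:
  azure:
    # Either an inline connection string...
    connectionString: ""
    # ...or a reference to a preexisting secret.
    connectionStringSecret:
      name: azure-storage
      key: connection-string
    containerName: superwise

# Scheduling controls applied to all workloads (standard Kubernetes fields).
nodeSelector: {}
tolerations: []
affinity: {}
```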

Installation

Option 1 - Helmfile

  1. Run helmfile sync to install Superwise.
    Helmfile will create the namespaces, install the dependencies and deploy Superwise to the cluster.

Option 2 - Kubectl

Helmfile can be used to template the Kubernetes manifests and write the output to stdout or a file.
The output can be used with kubectl apply -f.

  1. Run Helmfile and pipe the output to kubectl:
    helmfile template --include-crds | kubectl apply -f -
    

Option 3 - ArgoCD

Using directory

The rendered manifests from Helmfile can be stored in a Git repository and referenced in an ArgoCD application.

  1. Run Helmfile and write the output to a directory:

    helmfile template --output-dir manifests --include-crds
    
  2. Push the manifests dir to a Git repo.

  3. Create an application object and reference the Git repo that stores the manifests:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: superwise
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  source:
    repoURL: https://example.com/superwise.git
    targetRevision: master
    path: manifests
    directory:
      recurse: true
  destination:
    server: https://example.com:443
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Using Helmfile plugin

ArgoCD does not support Helmfile deployments natively, but its plugins feature allows any program that generates valid Kubernetes manifests as output to be used.

  1. Upload the Superwise deployment kit to a Git repo.

  2. Add the following configuration to the argocd-cm ConfigMap:

    configManagementPlugins: |
      - name: helmfile
        init:
          command: ["helmfile"]
          args: ["repos"]
        generate:
          command: ["helmfile"]
          args: ["template", "--skip-deps", "--include-crds"]
    

    This will register helmfile as a config management plugin for ArgoCD.
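
    For reference, this configuration lives under the data field of argocd-cm; a complete manifest would look roughly like this sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    # ArgoCD only reads ConfigMaps carrying this label.
    app.kubernetes.io/part-of: argocd
data:
  configManagementPlugins: |
    - name: helmfile
      init:
        command: ["helmfile"]
        args: ["repos"]
      generate:
        command: ["helmfile"]
        args: ["template", "--skip-deps", "--include-crds"]
```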

  3. Add an init container to the argocd-repo-server deployment:

    initContainers:
      - name: download-tools
        image: alpine:3
        command: [sh, -c]
        args:
          - wget -qO /tmp/helmfile.tar.gz https://github.com/helmfile/helmfile/releases/download/v0.145.2/helmfile_0.145.2_linux_amd64.tar.gz &&
            tar -C /tmp -xvf /tmp/helmfile.tar.gz && mv /tmp/helmfile /custom-tools/helmfile && chmod +x /custom-tools/helmfile
        volumeMounts:
          - mountPath: /custom-tools
            name: custom-tools
    

    The init container is used to download Helmfile to the ArgoCD repo server pods.

  4. Add a volume and volume mount to the argocd-repo-server deployment:

    volumes:
      - name: custom-tools
        emptyDir: {}
    volumeMounts:
      - mountPath: /usr/local/bin/helmfile
        name: custom-tools
        subPath: helmfile
    

    The volume and volume mount are used to make the Helmfile binary available on the ArgoCD repo server pods.
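
    The repo server pods only pick up the new binary and plugin configuration after a restart. Assuming ArgoCD runs in the argocd namespace, one way to trigger it is:

```
kubectl -n argocd rollout restart deployment argocd-repo-server
```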

  5. Create an application object and reference the Git repo that stores the Superwise deployment kit:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: superwise
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  source:
    repoURL: https://example.com/superwise.git
    targetRevision: master
    path: .
    plugin:
      name: helmfile
  destination:
    server: https://example.com:443
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Post installation

  1. Get the ingress IP using kubectl:

    kubectl get service -n istio-ingress istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    
  2. Create a DNS record that matches the value of superwise.hostName, point it to the ingress IP and generate an SSL certificate for it.

  3. Create a secret of type tls in the istio-ingress namespace and make sure the name matches the value in gateway.tls.secretName.

  4. Send the hostname to Superwise so it can be registered and enabled for authentication.
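
Step 3 can be sketched as follows; the certificate file paths and the secret name are placeholders, and the name must match the value of gateway.tls.secretName:

```
# Create the TLS secret that the ingress gateway will serve.
kubectl -n istio-ingress create secret tls superwise-tls \
  --cert=./tls.crt \
  --key=./tls.key
```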
