Deploy using kubectl and kpt

Instructions for using kubectl and kpt to deploy Kubeflow on Google Cloud

This guide describes how to use kubectl and kpt to deploy Kubeflow on Google Cloud.

Before you start

Before installing Kubeflow on the command line:

  1. You must have created a management cluster and installed Config Connector.

    • If you don’t have a management cluster, follow the instructions to create one.

    • Your management cluster must have a namespace set up to administer the Google Cloud project where Kubeflow will be deployed. Follow the instructions to create one if you haven’t already.

  2. If you’re using Cloud Shell, enable boost mode.

  3. Make sure that your Google Cloud project meets the minimum requirements described in the project setup guide.

  4. Follow the guide on setting up OAuth credentials to create OAuth credentials for Cloud Identity-Aware Proxy (Cloud IAP).

Install the required tools

  1. Install gcloud.

  2. Install the required gcloud components:

    gcloud components install kpt anthoscli beta
    gcloud components update
    
  3. Install kubectl.

  4. Install Kustomize v3.2.1.

    Note that Kubeflow is not compatible with later versions of Kustomize. Read this GitHub issue for the latest status.

    To deploy Kustomize v3.2.1 on a Linux machine, run the following commands:

    curl -LO https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.2.1/kustomize_kustomize.v3.2.1_linux_amd64
    mv kustomize_kustomize.v3.2.1_linux_amd64 kustomize
    chmod +x ./kustomize
       
    # Move the kustomize binary to a directory that is on your $PATH
    sudo mv ./kustomize /usr/local/bin/kustomize
    

    Then, to verify the installation, run kustomize version. You should see Version:kustomize/v3.2.1 in the output if Kustomize was installed successfully.

  5. Install yq

    GO111MODULE=on go get github.com/mikefarah/yq/v3
    
  6. Follow the instructions from Preparing to install Anthos Service Mesh to install istioctl.

    Note that the istioctl downloaded by following the above instructions is specific to Anthos Service Mesh. It is different from the istioctl that you can download from https://istio.io/.
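
As an optional sanity check (not part of the original steps), you can confirm that the tools are installed and on your PATH; the exact output format differs per tool:

  gcloud version
  kpt version
  kubectl version --client
  kustomize version                 # should report v3.2.1, per the note above
  istioctl version --remote=false   # client version only; flags may differ slightly for the Anthos Service Mesh build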

Prepare your environment

  1. Log in. You only need to run this command once:

    gcloud auth login
    
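Later steps in this guide reference several shell variables, for example ${KFDIR}, ${PROJECT}, ${KF_NAME}, and ${ZONE}. Exporting them up front is optional but convenient; the values below are placeholders, so substitute your own:

  export KFDIR=<directory for your Kubeflow blueprint>   # any name you like; used in the kpt steps below
  export PROJECT=<your Google Cloud project ID>
  export KF_NAME=<the name of your Kubeflow deployment>
  export ZONE=<the zone of your Kubeflow cluster>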

Fetch packages using kpt

  1. Fetch the blueprint

    kpt pkg get https://github.com/kubeflow/gcp-blueprints.git/kubeflow@v1.1.0 ./${KFDIR}
    
    • You can choose any name you would like for the directory ${KFDIR}
  2. Change to the Kubeflow directory

    cd ${KFDIR}
    
  3. Fetch Kubeflow manifests

    make get-pkg
    
  • This generates an error like the one below, but you can ignore it:

    kpt pkg get https://github.com/jlewi/manifests.git@blueprints ./upstream
    fetching package / from https://github.com/jlewi/manifests to upstream/manifests
    Error: resources must be annotated with config.kubernetes.io/index to be written to files
    
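After make get-pkg completes (despite the ignorable error above), the upstream manifests should be present. An optional way to confirm this, based on the output shown above, is:

  ls upstream
  # expect a manifests/ directory containing the fetched Kubeflow manifests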

Configure Kubeflow

There are certain parameters that you must define in order to configure how and where Kubeflow is deployed. These are described in the table below.

kpt setter               Description
mgmt-ctxt                This is the name of the KUBECONFIG context for the management cluster; this kubecontext will be used to create CNRM resources for your Kubeflow deployment. The context must set the namespace to the namespace in your CNRM cluster where you are creating CNRM resources for the managed project.
gcloud.core.project      The project you want to deploy in
location                 The zone or region you want to deploy in
gcloud.compute.region    The region you are deploying in
gcloud.compute.zone      The zone to use for zonal resources; must be in gcloud.compute.region
  • Location can be a zone or a region, depending on whether you want a zonal or a regional cluster

  • The Makefile in ${KFDIR} contains a rule set-values with the appropriate kpt cfg set commands to set the values of these parameters (see the illustrative sketch after this list).

  • You need to edit this Makefile to set the parameters to the desired values.

  • You need to configure the kubectl context provided in mgmt-ctxt.

    • Choose the management cluster context (${MGMT_CTXT} below is a placeholder for the name of that context):

      kubectl config use-context ${MGMT_CTXT}
      
    • Create a namespace in your management cluster for the managed project if you haven’t done so.

      kubectl create namespace ${PROJECT}
      
    • Make the managed project’s namespace the default for the context:

      kubectl config set-context --current --namespace ${PROJECT}
      
  • If you haven’t previously created an OAuth client for IAP, then follow the directions to set up your consent screen and OAuth client.

  • Set environment variables with OAuth Client ID and Secret for IAP

    export CLIENT_ID=<Your CLIENT_ID>
    export CLIENT_SECRET=<Your CLIENT_SECRET>
    
  • Invoke the make rule to set the kpt setters

    make set-values
    
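For reference, the set-values rule boils down to one kpt cfg set invocation per setter in the table above. The sketch below is illustrative only: the package directory (shown here as .) and the example values are assumptions, and the authoritative commands are the ones in the Makefile:

  # Syntax: kpt cfg set <PKG_DIR> <SETTER_NAME> <VALUE>
  kpt cfg set . mgmt-ctxt ${MGMT_CTXT}                # your management cluster's kubeconfig context
  kpt cfg set . gcloud.core.project ${PROJECT}
  kpt cfg set . gcloud.compute.region us-central1     # example region
  kpt cfg set . gcloud.compute.zone us-central1-c     # example zone; must be in the region above
  kpt cfg set . location us-central1                  # a region or a zone, per the table above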

Deploy Kubeflow

To deploy Kubeflow, run the following command:

make apply
  • If resources can’t be created because webhook.cert-manager.io is unavailable, wait and then rerun make apply (see the optional checks at the end of this list).

  • If resources can’t be created with an error message like:

    error: unable to recognize ".build/application/app.k8s.io_v1beta1_application_application-controller-kubeflow.yaml": no matches for kind "Application" in version "app.k8s.io/v1beta1"
    

    This issue occurs when a custom object is applied before its CRD has been established in the Kubernetes API server. The issue is expected and can happen multiple times for different kinds of resources. To resolve it, run make apply again.
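
Before re-running make apply for either of the errors above, you can optionally check whether the missing pieces have become available. The namespace and CRD names below are assumptions based on the error messages; adjust them if your deployment differs:

  # Check that the cert-manager webhook pods are running (namespace name assumed).
  kubectl -n cert-manager get pods

  # Check that the Application CRD (app.k8s.io/v1beta1) has been registered.
  kubectl get crd applications.app.k8s.io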

Check your deployment

Follow these steps to verify the deployment:

  1. When the deployment finishes, check the resources installed in the namespace kubeflow in your new cluster. To do this from the command line, first set your kubectl credentials to point to the new cluster:

    gcloud container clusters get-credentials ${KF_NAME} --zone ${ZONE} --project ${PROJECT}
    

    Then see what’s installed in the kubeflow namespace of your GKE cluster:

    kubectl -n kubeflow get all
    

Access the Kubeflow user interface (UI)

To access the Kubeflow central dashboard, follow these steps:

  1. Use the following command to grant yourself the IAP-secured Web App User role:

    gcloud projects add-iam-policy-binding [PROJECT] --member=user:[EMAIL] --role=roles/iap.httpsResourceAccessor
    

    Note that you need the IAP-secured Web App User role even if you are already an owner or editor of the project. The IAP-secured Web App User role is not implied by the Project Owner or Project Editor roles.

  2. Enter the following URI into your browser address bar. It can take 20 minutes for the URI to become available:

    https://<KF_NAME>.endpoints.<project-id>.cloud.goog/
    

    You can run the following command to get the URI for your deployment:

    kubectl -n istio-system get ingress
    NAME            HOSTS                                                      ADDRESS         PORTS   AGE
    envoy-ingress   your-kubeflow-name.endpoints.your-gcp-project.cloud.goog   34.102.232.34   80      5d13h
    

    The following command sets an environment variable named HOST to the URI:

    export HOST=$(kubectl -n istio-system get ingress envoy-ingress -o=jsonpath={.spec.rules[0].host})
    
  3. Follow the instructions on the UI to create a namespace. Refer to this guide for details on creating profiles.

Notes:

  • It can take 20 minutes for the URI to become available. Kubeflow needs to provision a signed SSL certificate and register a DNS name.
  • If you own or manage the domain or a subdomain with Cloud DNS, then you can configure this process to be much faster. See kubeflow/kubeflow#731.

Update Kubeflow

To update Kubeflow:

  1. Edit the Makefile in ${KFDIR} and change MANIFESTS_URL to point at the version of the Kubeflow manifests that you want to use

    • Refer to the kpt docs for more info about supported dependencies
  2. Update the local copies

    make update
    
  3. Redeploy

    make apply
    

To evaluate the changes before deploying them, you can run make hydrate and then compare the contents of .build to what is currently deployed.
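
For example, assuming you keep ${KFDIR} (including .build) in a git repository, as recommended in the Source Control section below, one simple way to preview a change is:

  make hydrate       # regenerate the manifests under .build without applying them
  git diff .build    # review the differences relative to the previously committed manifests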

Understanding the deployment process

This section gives you more details about the configuration and deployment process, so that you can customize your Kubeflow deployment if necessary.

Application layout

Your Kubeflow application directory ${KFDIR} contains the following files and directories:

  • upstream is a directory containing kustomize packages for deploying Kubeflow

    • This directory contains the upstream configurations on which your deployment is based
  • instance is a directory that defines overlays that are applied to the upstream configurations in order to customize Kubeflow for your use case.

    • gcp_config is a kustomize package defining all the Google Cloud resources needed for Kubeflow using Cloud Config Connector

      • You can edit this kustomize package in order to customize the Google Cloud resources for your purposes
    • kustomize contains kustomize packages for the various Kubernetes applications to be installed on your Kubeflow cluster

  • .build is a directory that will contain the hydrated manifests produced by the make rules
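
A rough sketch of the layout described above (the exact contents depend on the blueprint version you fetched):

  ${KFDIR}
  ├── Makefile        # make rules used throughout this guide
  ├── upstream/       # upstream Kubeflow manifests fetched by `make get-pkg`
  ├── instance/
  │   ├── gcp_config/ # Config Connector resources for the Google Cloud infrastructure
  │   └── kustomize/  # overlays for the Kubernetes applications on the Kubeflow cluster
  └── .build/         # hydrated manifests produced by the make rules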

Source Control

It is recommended that you check your entire ${KFDIR} directory into source control.

Checking in .build is recommended so you can easily see differences in manifests before applying them.

Google Cloud service accounts

The deployment process creates three service accounts in your Google Cloud project. These service accounts follow the principle of least privilege. The service accounts are:

  • ${KF_NAME}-admin is used for some admin tasks like configuring the load balancers. The principle is that this account is needed to deploy Kubeflow but not needed to actually run jobs.
  • ${KF_NAME}-user is intended to be used by training jobs and models to access Google Cloud resources (Cloud Storage, BigQuery, etc.). This account has a much smaller set of privileges compared to admin.
  • ${KF_NAME}-vm is used as the service account attached to the cluster’s virtual machines (VMs). This account has the minimal permissions needed to send metrics and logs to Stackdriver.
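
To confirm that these accounts exist, you can list the service accounts in your project and filter on your deployment name (an optional check, not part of the deployment steps):

  gcloud iam service-accounts list --project=${PROJECT} | grep ${KF_NAME}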

Next steps