diff --git a/notebooks/community/model_evaluation/automl_tabular_classification_model_evaluation.ipynb b/notebooks/community/model_evaluation/automl_tabular_classification_model_evaluation.ipynb
new file mode 100644
index 000000000..fb35c558c
--- /dev/null
+++ b/notebooks/community/model_evaluation/automl_tabular_classification_model_evaluation.ipynb
@@ -0,0 +1,1431 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ur8xi4C7S06n"
+ },
+ "outputs": [],
+ "source": [
+ "# Copyright 2022 Google LLC\n",
+ "#\n",
+ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JAPoU8Sm5E6e"
+ },
+ "source": [
+ "# Vertex AI Pipelines: Evaluating BatchPrediction results from AutoML Tabular Classification model\n",
+ "\n",
+ "
\n",
+ "\n",
+ " \n",
+ " \n",
+ " Run in Colab\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " \n",
+ " View on GitHub\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " \n",
+ " Open in Vertex AI Workbench\n",
+ " \n",
+ " | \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tvgnzT1CKxrO"
+ },
+ "source": [
+ "## Overview\n",
+ "\n",
+ "This notebook demonstrates how to use the Vertex AI classification model evaluation component to evaluate an AutoML classification model. Model evaluation helps you determine your model performance based on the evaluation metrics and improve the model if necessary. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d975e698c9a4"
+ },
+ "source": [
+ "### Objective\n",
+ "\n",
+ "In this tutorial, you train a Vertex AI AutoML Tabular Classification model and learn how to evaluate it through a Vertex AI pipeline job using `google_cloud_pipeline_components`:\n",
+ "\n",
+ "This tutorial uses the following Google Cloud ML services and resources:\n",
+ "\n",
+ "- Vertex AI `Datasets`\n",
+ "- Vertex AI `Training`(AutoML Tabular Classification) \n",
+ "- Vertex AI `Model Registry`\n",
+ "- Vertex AI `Pipelines`\n",
+ "- Vertex AI `Batch Predictions`\n",
+ "\n",
+ "\n",
+ "\n",
+ "The steps performed include:\n",
+ "\n",
+ "- Create a Vertex AI `Dataset`.\n",
+ "- Train a Automl Tabular Classification model on the `Dataset` resource.\n",
+ "- Import the trained `AutoML model resource` into the pipeline.\n",
+ "- Run a `Batch Prediction` job.\n",
+ "- Evaulate the AutoML model using the `Classification Evaluation Component`.\n",
+ "- Import the Classification Metrics to the AutoML model resource."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "08d289fa873f"
+ },
+ "source": [
+ "### Dataset\n",
+ "\n",
+ "The dataset being used in this notebook is a part of the PetFinder Dataset, available [here](https://www.kaggle.com/c/petfinder-adoption-prediction) on Kaggle. The current dataset is only a part of the original dataset considered for the problem of predicting whether the pet is adopted or not. It consists of the following fields:\n",
+ "\n",
+ "- `Type`: Type of animal (1 = Dog, 2 = Cat)\n",
+ "- `Age`: Age of pet when listed, in months\n",
+ "- `Breed1`: Primary breed of pet\n",
+ "- `Gender`: Gender of pet\n",
+ "- `Color1`: Color 1 of pet \n",
+ "- `Color2`: Color 2 of pet\n",
+ "- `MaturitySize`: Size at maturity (1 = Small, 2 = Medium, 3 = Large, 4 = Extra Large, 0 = Not Specified)\n",
+ "- `FurLength`: Fur length (1 = Short, 2 = Medium, 3 = Long, 0 = Not Specified)\n",
+ "- `Vaccinated`: Pet has been vaccinated (1 = Yes, 2 = No, 3 = Not Sure)\n",
+ "- `Sterilized`: Pet has been spayed / neutered (1 = Yes, 2 = No, 3 = Not Sure)\n",
+ "- `Health`: Health Condition (1 = Healthy, 2 = Minor Injury, 3 = Serious Injury, 0 = Not Specified)\n",
+ "- `Fee`: Adoption fee (0 = Free)\n",
+ "- `PhotoAmt`: Total uploaded photos for this pet\n",
+ "- `Adopted`: Whether or not the pet was adopted (Yes/No).\n",
+ "\n",
+ "**Note**: This dataset is moved to a public Cloud Storage bucket from where it is accessed in this notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aed92deeb4a0"
+ },
+ "source": [
+ "### Costs \n",
+ "This tutorial uses billable components of Google Cloud:\n",
+ "\n",
+ "* Vertex AI\n",
+ "* Cloud Storage\n",
+ "\n",
+ "Learn about [Vertex AI\n",
+ "pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
+ "pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
+ "Calculator](https://cloud.google.com/products/calculator/)\n",
+ "to generate a cost estimate based on your projected usage."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ze4-nDLfK4pw"
+ },
+ "source": [
+ "### Set up your local development environment\n",
+ "\n",
+ "**If you are using Colab or Vertex AI Workbench Notebooks**, your environment already meets\n",
+ "all the requirements to run this notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gCuSR8GkAgzl"
+ },
+ "source": [
+ "**Otherwise**, make sure your environment meets this notebook's requirements.\n",
+ "You need the following:\n",
+ "\n",
+ "* The Google Cloud SDK\n",
+ "* Git\n",
+ "* Python 3\n",
+ "* virtualenv\n",
+ "* Jupyter notebook running in a virtual environment with Python 3\n",
+ "\n",
+ "The Google Cloud guide to [Setting up a Python development\n",
+ "environment](https://cloud.google.com/python/setup) and the [Jupyter\n",
+ "installation guide](https://jupyter.org/install) provide detailed instructions\n",
+ "for meeting these requirements. The following steps provide a condensed set of\n",
+ "instructions:\n",
+ "\n",
+ "1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n",
+ "\n",
+ "1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n",
+ "\n",
+ "1. [Install\n",
+ " virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n",
+ " and create a virtual environment that uses Python 3. Activate the virtual environment.\n",
+ "\n",
+ "1. To install Jupyter, run `pip3 install jupyter` on the\n",
+ "command-line in a terminal shell.\n",
+ "\n",
+ "1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n",
+ "\n",
+ "1. Open this notebook in the Jupyter Notebook Dashboard."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "i7EUnXsZhAGF"
+ },
+ "source": [
+ "## Installation\n",
+ "\n",
+ "Install the following packages required to execute this notebook. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2b4ef9b72d43"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "# The Vertex AI Workbench Notebook product has specific requirements\n",
+ "IS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\")\n",
+ "IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n",
+ " \"/opt/deeplearning/metadata/env_version\"\n",
+ ")\n",
+ "\n",
+ "# Vertex AI Notebook requires dependencies to be installed with '--user'\n",
+ "USER_FLAG = \"\"\n",
+ "if IS_WORKBENCH_NOTEBOOK:\n",
+ " USER_FLAG = \"--user\"\n",
+ "\n",
+ "! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q\n",
+ "! pip3 install google-cloud-pipeline-components==1.0.17 {USER_FLAG} -q\n",
+ "! pip3 install --upgrade kfp google-cloud-pipeline-components {USER_FLAG} -q\n",
+ "! pip3 install --upgrade matplotlib {USER_FLAG} -q"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hhq5zEbGg0XX"
+ },
+ "source": [
+ "### Restart the kernel\n",
+ "\n",
+ "After you install the additional packages, you need to restart the notebook kernel so it can find the packages."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "EzrelQZ22IZj"
+ },
+ "outputs": [],
+ "source": [
+ "# Automatically restart kernel after installs\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"IS_TESTING\"):\n",
+ " # Automatically restart kernel after installs\n",
+ " import IPython\n",
+ "\n",
+ " app = IPython.Application.instance()\n",
+ " app.kernel.do_shutdown(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lWEdiXsJg0XY"
+ },
+ "source": [
+ "## Before you begin"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BF1j6f9HApxa"
+ },
+ "source": [
+ "### Set up your Google Cloud project\n",
+ "\n",
+ "**The following steps are required, regardless of your notebook environment.**\n",
+ "\n",
+ "1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
+ "\n",
+ "1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
+ "\n",
+ "1. [Enable the Vertex AI, Compute Engine, and Dataflow APIs](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component,dataflow.googleapis.com).\n",
+ "\n",
+ "1. If you are running this notebook locally, you need to install the [Cloud SDK](https://cloud.google.com/sdk).\n",
+ "\n",
+ "1. Enter your project ID in the cell below. Then run the cell to make sure the\n",
+ "Cloud SDK uses the right project for all the commands in this notebook.\n",
+ "\n",
+ "**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WReHDGG5g0XY"
+ },
+ "source": [
+ "#### Set your project ID\n",
+ "\n",
+ "**If you don't know your project ID**, you may be able to get your project ID using `gcloud`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "oM1iC_MfAts1"
+ },
+ "outputs": [],
+ "source": [
+ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "riG_qUokg0XZ"
+ },
+ "outputs": [],
+ "source": [
+ "if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n",
+ " # Get your GCP project id from gcloud\n",
+ " shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n",
+ " PROJECT_ID = shell_output[0]\n",
+ " print(\"Project ID:\", PROJECT_ID)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "set_gcloud_project_id"
+ },
+ "outputs": [],
+ "source": [
+ "! gcloud config set project $PROJECT_ID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "region"
+ },
+ "source": [
+ "#### Region\n",
+ "\n",
+ "You can also change the `REGION` variable, which is used for operations\n",
+ "throughout the rest of this notebook. Below are regions supported for Vertex AI. It is recommended that you choose the region closest to you.\n",
+ "\n",
+ "- Americas: `us-central1`\n",
+ "- Europe: `europe-west4`\n",
+ "- Asia Pacific: `asia-east1`\n",
+ "\n",
+ "You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\n",
+ "\n",
+ "Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sduDOFQVF6kv"
+ },
+ "outputs": [],
+ "source": [
+ "REGION = \"[your-region]\" # @param {type: \"string\"}\n",
+ "\n",
+ "if REGION == \"[your-region]\":\n",
+ " REGION = \"us-central1\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "06571eb4063b"
+ },
+ "source": [
+ "#### UUID\n",
+ "\n",
+ "If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a uuid for each instance session, and append it onto the name of resources you create in this tutorial."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "697568e92bd6"
+ },
+ "outputs": [],
+ "source": [
+ "import random\n",
+ "import string\n",
+ "\n",
+ "\n",
+ "# Generate a uuid of a specifed length(default=8)\n",
+ "def generate_uuid(length: int = 8) -> str:\n",
+ " return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n",
+ "\n",
+ "\n",
+ "UUID = generate_uuid()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dr--iN2kAylZ"
+ },
+ "source": [
+ "### Authenticate your Google Cloud account\n",
+ "\n",
+ "**If you are using Vertex AI Workbench Notebooks**, your environment is already\n",
+ "authenticated. Skip this step."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sBCra4QMA2wR"
+ },
+ "source": [
+ "**If you are using Colab**, run the cell below and follow the instructions\n",
+ "when prompted to authenticate your account via oAuth.\n",
+ "\n",
+ "**Otherwise**, follow these steps:\n",
+ "\n",
+ "1. In the Cloud Console, go to the [**Create service account key**\n",
+ " page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n",
+ "\n",
+ "2. Click **Create service account**.\n",
+ "\n",
+ "3. In the **Service account name** field, enter a name, and\n",
+ " click **Create**.\n",
+ "\n",
+ "4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n",
+ "into the filter box, and select\n",
+ " **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
+ "\n",
+ "5. Click **Create**. A JSON file that contains your key downloads to your\n",
+ "local environment.\n",
+ "\n",
+ "6. Enter the path to your service account key as the\n",
+ "`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "PyQmSRbKA8r-"
+ },
+ "outputs": [],
+ "source": [
+ "# If you are running this notebook in Colab, run this cell and follow the\n",
+ "# instructions to authenticate your GCP account. This provides access to your\n",
+ "# Cloud Storage bucket and lets you submit training jobs and prediction\n",
+ "# requests.\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "\n",
+ "# If on Vertex AI Workbench, then don't execute this code\n",
+ "IS_COLAB = \"google.colab\" in sys.modules\n",
+ "if not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n",
+ " \"DL_ANACONDA_HOME\"\n",
+ "):\n",
+ " if \"google.colab\" in sys.modules:\n",
+ " from google.colab import auth as google_auth\n",
+ "\n",
+ " google_auth.authenticate_user()\n",
+ "\n",
+ " # If you are running this notebook locally, replace the string below with the\n",
+ " # path to your service account key and run this cell to authenticate your GCP\n",
+ " # account.\n",
+ " elif not os.getenv(\"IS_TESTING\"):\n",
+ " %env GOOGLE_APPLICATION_CREDENTIALS ''"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zgPO1eR3CYjk"
+ },
+ "source": [
+ "### Create a Cloud Storage bucket\n",
+ "\n",
+ "**The following steps are required, regardless of your notebook environment.**\n",
+ "\n",
+ "When you run a Vertex AI pipeline job using the Cloud SDK, your job stores the pipeline artifacts to a Cloud Storage bucket. In this tutorial, you create a Vertex AI Pipeline job that saves the artifacts like evaluation metrics and feature attributes to a Cloud Storage bucket.\n",
+ "\n",
+ "Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "MzGDU7TWdts_"
+ },
+ "outputs": [],
+ "source": [
+ "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n",
+ "BUCKET_URI = f\"gs://{BUCKET_NAME}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "cf221059d072"
+ },
+ "outputs": [],
+ "source": [
+ "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n",
+ " BUCKET_NAME = PROJECT_ID + \"aip-\" + UUID\n",
+ " BUCKET_URI = f\"gs://{BUCKET_NAME}\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-EcIXiGsCePi"
+ },
+ "source": [
+ "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "NIq7R4HZCfIc"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ucvCsknMCims"
+ },
+ "source": [
+ "Finally, validate access to your Cloud Storage bucket by examining its contents:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vhOb7YnwClBb"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil ls -al $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "set_service_account"
+ },
+ "source": [
+ "#### Service Account\n",
+ "\n",
+ "You use a service account to create Vertex AI Pipeline jobs. If you do not want to use your project's Compute Engine service account, set `SERVICE_ACCOUNT` to another service account ID."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UwC1AdGeF6kx"
+ },
+ "outputs": [],
+ "source": [
+ "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "autoset_service_account"
+ },
+ "outputs": [],
+ "source": [
+ "if (\n",
+ " SERVICE_ACCOUNT == \"\"\n",
+ " or SERVICE_ACCOUNT is None\n",
+ " or SERVICE_ACCOUNT == \"[your-service-account]\"\n",
+ "):\n",
+ " # Get your service account from gcloud\n",
+ " if not IS_COLAB:\n",
+ " shell_output = !gcloud auth list 2>/dev/null\n",
+ " SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n",
+ "\n",
+ " else: # IS_COLAB:\n",
+ " shell_output = ! gcloud projects describe $PROJECT_ID\n",
+ " project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
+ " SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
+ "\n",
+ " print(\"Service Account:\", SERVICE_ACCOUNT)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "set_service_account:pipelines"
+ },
+ "source": [
+ "#### Set service account access for Vertex AI Pipelines\n",
+ "\n",
+ "Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step. You only need to run this step once per service account."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "6OqzKqhMF6kx"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n",
+ "\n",
+ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XoEqT2Y4DJmf"
+ },
+ "source": [
+ "### Import libraries\n",
+ "\n",
+ "Import the Vertex AI Python SDK and other required Python libraries."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "pRUOFELefqf1"
+ },
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "import google.cloud.aiplatform as aiplatform\n",
+ "import kfp\n",
+ "import matplotlib.pyplot as plt\n",
+ "from google.cloud import aiplatform_v1\n",
+ "from kfp.v2 import compiler"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "init_aip:mbsdk,all"
+ },
+ "source": [
+ "### Initialize Vertex AI SDK for Python\n",
+ "\n",
+ "Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ksAefQcCF6ky"
+ },
+ "outputs": [],
+ "source": [
+ "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8d97acf78771"
+ },
+ "source": [
+ "## Create Vertex AI Dataset\n",
+ "\n",
+ "Create a managed tabular dataset resource in Vertex AI using the dataset source."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3390c9e9426c"
+ },
+ "outputs": [],
+ "source": [
+ "DATA_SOURCE = \"gs://cloud-samples-data/ai-platform-unified/datasets/tabular/petfinder-tabular-classification.csv\""
+ ]
+ },
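+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optionally, you can preview the first few rows of the source CSV to confirm the schema described above. This is a quick sketch that relies on `gsutil`, which is available in Colab and Vertex AI Workbench."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: preview the first few rows of the source CSV\n",
+ "! gsutil cat $DATA_SOURCE | head -5"
+ ]
+ },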
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2011a473ce65"
+ },
+ "outputs": [],
+ "source": [
+ "# Create the Vertex AI Dataset resource\n",
+ "dataset = aiplatform.TabularDataset.create(\n",
+ " display_name=\"petfinder-tabular-dataset_\" + UUID,\n",
+ " gcs_source=DATA_SOURCE,\n",
+ ")\n",
+ "\n",
+ "print(\"Resource name:\", dataset.resource_name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6da01c2f1d4f"
+ },
+ "source": [
+ "## Train AutoML model\n",
+ "\n",
+ "Train a simple classification model the created dataset using `Adopted` as the target column. \n",
+ "\n",
+ "Set a display name and create the training job using `AutoMLTabularTrainingJob` with appropriate data types specified for column transformations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "5dd3db2d1225"
+ },
+ "outputs": [],
+ "source": [
+ "TRAINING_JOB_DISPLAY_NAME = \"[your-train-job-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0614e3fb19da"
+ },
+ "outputs": [],
+ "source": [
+ "# If no display name is specified, use the default one\n",
+ "if (\n",
+ " TRAINING_JOB_DISPLAY_NAME == \"\"\n",
+ " or TRAINING_JOB_DISPLAY_NAME is None\n",
+ " or TRAINING_JOB_DISPLAY_NAME == \"[your-train-job-display-name]\"\n",
+ "):\n",
+ " TRAINING_JOB_DISPLAY_NAME = \"train-petfinder-automl_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ce9c9f279674"
+ },
+ "source": [
+ "`AutoMLTabularTrainingJob` class creates an AutoML training job using the following parameters: \n",
+ "\n",
+ "- `display_name`: The human readable name for the Vertex AI TrainingJob resource.\n",
+ "- `optimization_prediction_type`: The type of prediction the Model is to produce. Ex: regression, classification.\n",
+ "- `column_specs`(Optional): Transformations to apply to the input columns (including data-type corrections).\n",
+ "- `optimization_objective`: The optimization objective to minimize or maximize. Depending on the type of prediction, this parameter is chosen. If the field is not set, the default objective function is used. \n",
+ "\n",
+ "For more details, please go through this [documentation for **AutoMLTabularTrainingJob** Class](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.AutoMLTabularTrainingJob)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "d33629c2aae6"
+ },
+ "outputs": [],
+ "source": [
+ "# Define the AutoML training job\n",
+ "train_job = aiplatform.AutoMLTabularTrainingJob(\n",
+ " display_name=TRAINING_JOB_DISPLAY_NAME,\n",
+ " optimization_prediction_type=\"classification\",\n",
+ " column_specs={\n",
+ " \"Type\": \"categorical\",\n",
+ " \"Age\": \"numeric\",\n",
+ " \"Breed1\": \"categorical\",\n",
+ " \"Color1\": \"categorical\",\n",
+ " \"Color2\": \"categorical\",\n",
+ " \"MaturitySize\": \"categorical\",\n",
+ " \"FurLength\": \"categorical\",\n",
+ " \"Vaccinated\": \"categorical\",\n",
+ " \"Sterilized\": \"categorical\",\n",
+ " \"Health\": \"categorical\",\n",
+ " \"Fee\": \"numeric\",\n",
+ " \"PhotoAmt\": \"numeric\",\n",
+ " },\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "391c51c98647"
+ },
+ "source": [
+ "Set the display name for the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "454f077b984e"
+ },
+ "outputs": [],
+ "source": [
+ "MODEL_DISPLAY_NAME = \"[your-model-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "21b5a27e8171"
+ },
+ "outputs": [],
+ "source": [
+ "# If no name is specified, use the default name\n",
+ "if (\n",
+ " MODEL_DISPLAY_NAME == \"\"\n",
+ " or MODEL_DISPLAY_NAME is None\n",
+ " or MODEL_DISPLAY_NAME == \"[your-model-display-name]\"\n",
+ "):\n",
+ " MODEL_DISPLAY_NAME = \"pet-adoption-prediction-model_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "93ebafd3f347"
+ },
+ "source": [
+ "Run training job on the created TabularDataset by passing the following arguments for training:\n",
+ "\n",
+ "- `dataset`: The TabularDataset within the same Project from which data needs to be used to train the Model.\n",
+ "- `target_column`: The name of the column values of which the Model is to predict.\n",
+ "- `model_display_name`: The display name of the Vertex AI Model that is produced as an output. \n",
+ "- `budget_milli_node_hours`(Optional): The training budget of creating this Model, expressed in milli node hours i.e. 1,000 value in this field means 1 node hour. The training cost of the model does not exceed this budget.\n",
+ "\n",
+ "For more details on the other parameters used in the `run`() method, please visit this [documentation for **AutoMLTabularTrainingJob** Class](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.AutoMLTabularTrainingJob#google_cloud_aiplatform_AutoMLTabularTrainingJob_run).\n",
+ "\n",
+ "The training job takes roughly 1.5-2 hours to finish."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "9ce44a2ab942"
+ },
+ "outputs": [],
+ "source": [
+ "# Specify the target column\n",
+ "target_column = \"Adopted\"\n",
+ "\n",
+ "# Run the training job\n",
+ "model = train_job.run(\n",
+ " dataset=dataset,\n",
+ " target_column=target_column,\n",
+ " model_display_name=MODEL_DISPLAY_NAME,\n",
+ " budget_milli_node_hours=1000,\n",
+ ")"
+ ]
+ },
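+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The training run above can take a couple of hours. If your kernel restarts while you wait, you don't have to retrain: as a sketch, you can recover the trained model from the Vertex AI Model Registry by its display name once training has finished (this assumes `MODEL_DISPLAY_NAME` is set as above)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: recover the trained model by its display name if the kernel restarted.\n",
+ "# Assumes MODEL_DISPLAY_NAME is set as above and the training job has finished.\n",
+ "existing_models = aiplatform.Model.list(filter=f'display_name=\"{MODEL_DISPLAY_NAME}\"')\n",
+ "if existing_models:\n",
+ "    model = existing_models[0]\n",
+ "    print(\"Recovered model:\", model.resource_name)"
+ ]
+ },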
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bfa52eb3f22f"
+ },
+ "source": [
+ "## List model evaluations from training\n",
+ "\n",
+ "After the training job is finished, get the model evaluations and print them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "d56e2b3cf57d"
+ },
+ "outputs": [],
+ "source": [
+ "# Get evaluations\n",
+ "model_evaluations = model.list_model_evaluations()\n",
+ "\n",
+ "model_evaluation = list(model_evaluations)[0]\n",
+ "print(model_evaluation)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "bd2e1da7a64e"
+ },
+ "outputs": [],
+ "source": [
+ "# Print the evaluation metrics\n",
+ "for evaluation in model_evaluations:\n",
+ " evaluation = evaluation.to_dict()\n",
+ " print(\"Model's evaluation metrics from Training:\\n\")\n",
+ " metrics = evaluation[\"metrics\"]\n",
+ " for metric in metrics.keys():\n",
+ " print(f\"metric: {metric}, value: {metrics[metric]}\\n\")"
+ ]
+ },
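+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The metrics printed above are keyed by name. As a small sketch (using the `metrics` dictionary from the previous cell), you can also read a single metric, such as the area under the precision-recall curve, directly:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: look up a single training metric by name\n",
+ "print(\"auPrc:\", metrics.get(\"auPrc\"))"
+ ]
+ },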
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "19c434d8b035"
+ },
+ "source": [
+ "## Create Pipeline for evaluations\n",
+ "\n",
+ "Now, you run a Vertex AI BatchPrediction job and generate evaluations and feature attributions on its results. \n",
+ "\n",
+ "To do so, you create a Vertex AI pipeline using the components available from the [`google-cloud-pipeline-components`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/index.html) Python package.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ab9f273691cc"
+ },
+ "source": [
+ "### Define the Pipeline\n",
+ "\n",
+ "While defining the flow of the pipeline, you get the model resource first. Then, you sample the provided source dataset for batch predictions and create a batch prediction. The explanations are enabled while creating the batch prediction job to generate feature attributions. Once the batch prediction job is completed, you get the classification evaluation metrics and the feature attributions from the results.\n",
+ "\n",
+ "The pipeline uses the following components:\n",
+ "\n",
+ "- `GetVertexModelOp`: Gets a Vertex Model Artifact. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.GetVertexModelOp).\n",
+ "- `EvaluationDataSamplerOp`: Randomly downsamples an input dataset to a specified size for computing Vertex XAI feature attributions for AutoML Tables and custom models. Creates a Dataflow job with Apache Beam to downsample the dataset. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.EvaluationDataSamplerOp).\n",
+ "- `ModelBatchPredictOp`: Creates a Google Cloud Vertex BatchPredictionJob and waits for it to complete. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.aiplatform.html#google_cloud_pipeline_components.aiplatform.ModelBatchPredictOp).\n",
+ "- `ModelEvaluationClassificationOp`: Compute evaluation metrics on a trained model’s batch prediction results. Creates a Dataflow job with Apache Beam and TFMA to compute evaluation metrics. Supports mutliclass classification evaluation for tabular, image, video, and text data. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelEvaluationClassificationOp).\n",
+ "- `ModelEvaluationFeatureAttributionOp`: Compute feature attribution on a trained model’s batch explanation results. Creates a Dataflow job with Apache Beam and TFMA to compute feature attributions. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelEvaluationFeatureAttributionOp).\n",
+ "- `ModelImportEvaluationOp`: Imports a model evaluation artifact to an existing Vertex model with ModelService.ImportModelEvaluation. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelImportEvaluationOp)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "327d8d4e11b2"
+ },
+ "outputs": [],
+ "source": [
+ "@kfp.dsl.pipeline(\n",
+ " name=\"vertex-evaluation-automl-tabular-classification-feature-attribution\"\n",
+ ")\n",
+ "def evaluation_automl_tabular_feature_attribution_pipeline(\n",
+ " project: str,\n",
+ " location: str,\n",
+ " root_dir: str,\n",
+ " model_name: str,\n",
+ " target_column_name: str,\n",
+ " batch_predict_gcs_source_uris: list,\n",
+ " batch_predict_instances_format: str,\n",
+ " batch_predict_predictions_format: str = \"jsonl\",\n",
+ " batch_predict_machine_type: str = \"n1-standard-4\",\n",
+ " batch_predict_explanation_metadata: dict = {},\n",
+ " batch_predict_explanation_parameters: dict = {},\n",
+ " batch_predict_explanation_data_sample_size: int = 10000,\n",
+ "):\n",
+ "\n",
+ " from google_cloud_pipeline_components.aiplatform import ModelBatchPredictOp\n",
+ " from google_cloud_pipeline_components.experimental.evaluation import (\n",
+ " EvaluationDataSamplerOp, GetVertexModelOp,\n",
+ " ModelEvaluationClassificationOp, ModelEvaluationFeatureAttributionOp,\n",
+ " ModelImportEvaluationOp)\n",
+ "\n",
+ " # Get the Vertex AI model resource\n",
+ " get_model_task = GetVertexModelOp(model_resource_name=model_name)\n",
+ "\n",
+ " # Run Data-sampling task\n",
+ " data_sampler_task = EvaluationDataSamplerOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " gcs_source_uris=batch_predict_gcs_source_uris,\n",
+ " instances_format=batch_predict_instances_format,\n",
+ " sample_size=batch_predict_explanation_data_sample_size,\n",
+ " )\n",
+ "\n",
+ " # Run Batch Explanations\n",
+ " batch_explain_task = ModelBatchPredictOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " model=get_model_task.outputs[\"model\"],\n",
+ " job_display_name=\"model-registry-batch-predict-evaluation\",\n",
+ " gcs_source_uris=data_sampler_task.outputs[\"gcs_output_directory\"],\n",
+ " instances_format=batch_predict_instances_format,\n",
+ " predictions_format=batch_predict_predictions_format,\n",
+ " gcs_destination_output_uri_prefix=root_dir,\n",
+ " machine_type=batch_predict_machine_type,\n",
+ " # Set the explanation parameters\n",
+ " generate_explanation=True,\n",
+ " explanation_parameters=batch_predict_explanation_parameters,\n",
+ " explanation_metadata=batch_predict_explanation_metadata,\n",
+ " )\n",
+ "\n",
+ " # Run evaluation based on prediction type and feature attribution component.\n",
+ " # After, import the model evaluations to the Vertex model.\n",
+ " eval_task = ModelEvaluationClassificationOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " problem_type=\"classification\",\n",
+ " ground_truth_column=target_column_name,\n",
+ " predictions_gcs_source=batch_explain_task.outputs[\"gcs_output_directory\"],\n",
+ " predictions_format=batch_predict_predictions_format,\n",
+ " )\n",
+ "\n",
+ " # Get Feature Attributions\n",
+ " feature_attribution_task = ModelEvaluationFeatureAttributionOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " predictions_format=batch_predict_predictions_format,\n",
+ " predictions_gcs_source=batch_explain_task.outputs[\"gcs_output_directory\"],\n",
+ " )\n",
+ "\n",
+ " ModelImportEvaluationOp(\n",
+ " classification_metrics=eval_task.outputs[\"evaluation_metrics\"],\n",
+ " feature_attributions=feature_attribution_task.outputs[\"feature_attributions\"],\n",
+ " model=get_model_task.outputs[\"model\"],\n",
+ " dataset_type=batch_predict_instances_format,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1abb012ce04b"
+ },
+ "source": [
+ "### Compile the pipeline\n",
+ "\n",
+ "Next, compile the pipline to the `tabular_classification_pipline.json` file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "e526b588cae9"
+ },
+ "outputs": [],
+ "source": [
+ "compiler.Compiler().compile(\n",
+ " pipeline_func=evaluation_automl_tabular_feature_attribution_pipeline,\n",
+ " package_path=\"tabular_classification_pipeline.json\",\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "26eef4b83c88"
+ },
+ "source": [
+ "### Define the parameters to run the pipeline\n",
+ "\n",
+ "Specify the required parameters to run the pipeline.\n",
+ "\n",
+ "Set a display name for your pipeline."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "63b84f5490d2"
+ },
+ "outputs": [],
+ "source": [
+ "PIPELINE_DISPLAY_NAME = \"[your-pipeline-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "e0a18b803bb7"
+ },
+ "outputs": [],
+ "source": [
+ "# If no display name is set, use the default one\n",
+ "if (\n",
+ " PIPELINE_DISPLAY_NAME == \"[your-pipeline-display-name]\"\n",
+ " or PIPELINE_DISPLAY_NAME == \"\"\n",
+ " or PIPELINE_DISPLAY_NAME is None\n",
+ "):\n",
+ " PIPELINE_DISPLAY_NAME = \"pet_adoption_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "a9571ef567de"
+ },
+ "source": [
+ "To pass the required arguments to the pipeline, you define the following paramters below:\n",
+ "\n",
+ "- `project`: Project ID.\n",
+ "- `location`: Region where the pipeline is run.\n",
+ "- `root_dir`: The GCS directory for keeping staging files and artifacts. A random subdirectory is created under the directory to keep job info for resuming the job in case of failure.\n",
+ "- `model_name`: Resource name of the trained AutoML Tabular Classification model.\n",
+ "- `target_column_name`: Name of the column to be used as the target for classification.\n",
+ "- `batch_predict_gcs_source_uris`: List of the Cloud Storage bucket uris of input instances for batch prediction.\n",
+ "- `batch_predict_instances_format`: Format of the input instances for batch prediction. Can be '**jsonl**' or '**bigquery**' or '**csv**'.\n",
+ "- `batch_predict_explanation_data_sample_size`: Size of the samples to be considered for batch prediction and evaluation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "52d622c274d2"
+ },
+ "outputs": [],
+ "source": [
+ "PIPELINE_ROOT = f\"{BUCKET_URI}/pipeline_root/pet_adoption_{UUID}\"\n",
+ "parameters = {\n",
+ " \"project\": PROJECT_ID,\n",
+ " \"location\": REGION,\n",
+ " \"root_dir\": PIPELINE_ROOT,\n",
+ " \"model_name\": model.resource_name,\n",
+ " \"target_column_name\": \"Adopted\",\n",
+ " \"batch_predict_gcs_source_uris\": [DATA_SOURCE],\n",
+ " \"batch_predict_instances_format\": \"csv\",\n",
+ " \"batch_predict_explanation_data_sample_size\": 3000,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0409b0f330c2"
+ },
+ "source": [
+ "Create a Vertex AI pipeline job using the following parameters:\n",
+ "\n",
+ "- `display_name`: The name of the pipeline, this will show up in the Google Cloud console.\n",
+ "- `template_path`: The path of PipelineJob or PipelineSpec JSON or YAML file. It can be a local path, a Google Cloud Storage URI or an Artifact Registry URI.\n",
+ "- `parameter_values`: The mapping from runtime parameter names to its values that\n",
+ " control the pipeline run.\n",
+ "- `enable_caching`: Whether to turn on caching for the run.\n",
+ "\n",
+ "Learn more about the `PipelineJob` class from [this documentation](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob).\n",
+ "\n",
+ "After creating, run the pipeline job using the configured `SERVICE_ACCOUNT`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "894afe1ba396"
+ },
+ "outputs": [],
+ "source": [
+ "job = aiplatform.PipelineJob(\n",
+ " display_name=PIPELINE_DISPLAY_NAME,\n",
+ " template_path=\"tabular_classification_pipeline.json\",\n",
+ " parameter_values=parameters,\n",
+ " enable_caching=True,\n",
+ ")\n",
+ "\n",
+ "job.run(service_account=SERVICE_ACCOUNT)"
+ ]
+ },
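+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`job.run()` blocks the notebook until the pipeline finishes, which can take a while because of the batch prediction and Dataflow steps. As an alternative sketch, the `PipelineJob` class also lets you submit the run without blocking and poll its state; set the hypothetical `RUN_ASYNC` flag below to try it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sketch: submit the pipeline without blocking, then poll or wait.\n",
+ "RUN_ASYNC = False  # set to True to submit a separate, non-blocking run\n",
+ "\n",
+ "if RUN_ASYNC:\n",
+ "    async_job = aiplatform.PipelineJob(\n",
+ "        display_name=PIPELINE_DISPLAY_NAME,\n",
+ "        template_path=\"tabular_classification_pipeline.json\",\n",
+ "        parameter_values=parameters,\n",
+ "        enable_caching=True,\n",
+ "    )\n",
+ "    async_job.submit(service_account=SERVICE_ACCOUNT)\n",
+ "    print(\"Pipeline state:\", async_job.state)\n",
+ "    async_job.wait()  # block later, once you are ready"
+ ]
+ },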
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ce6beLsXASnK"
+ },
+ "source": [
+ "## Model Evaluation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mKRTDi8ioXBY"
+ },
+ "source": [
+ "In the results from last step, click on the generated link to see your run in the Cloud Console.\n",
+ "\n",
+ "In the UI, many of the pipeline directed acyclic graph (DAG) nodes expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XcKaONSsGNC4"
+ },
+ "source": [
+ "### Get the Model Evaluation Results\n",
+ "\n",
+ "After the evalution pipeline is finished, run the below cell to print the evaluation metrics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ec4ec00ab350"
+ },
+ "outputs": [],
+ "source": [
+ "# Iterate over the pipeline tasks\n",
+ "for task in job._gca_resource.job_detail.task_details:\n",
+ " # Obtain the artifacts from the evaluation task\n",
+ " if (\n",
+ " (\"model-evaluation\" in task.task_name)\n",
+ " and (\"model-evaluation-import\" not in task.task_name)\n",
+ " and (\n",
+ " task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
+ " or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
+ " )\n",
+ " ):\n",
+ " evaluation_metrics = task.outputs.get(\"evaluation_metrics\").artifacts[0]\n",
+ " evaluation_metrics_gcs_uri = evaluation_metrics.uri\n",
+ "\n",
+ "print(evaluation_metrics)\n",
+ "print(evaluation_metrics_gcs_uri)"
+ ]
+ },
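+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Because the pipeline's `ModelImportEvaluationOp` step imports the metrics back into the model, you can also list the model's evaluations again and see the newly imported evaluation alongside the one produced during training. This is an optional sanity check and isn't required for the rest of the tutorial."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: confirm the imported evaluation is now attached to the model\n",
+ "for evaluation in model.list_model_evaluations():\n",
+ "    evaluation = evaluation.to_dict()\n",
+ "    print(evaluation[\"name\"])\n",
+ "    print(\"Metric keys:\", list(evaluation[\"metrics\"].keys()), \"\\n\")"
+ ]
+ },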
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ca00512eb89f"
+ },
+ "source": [
+ "### Visualize the metrics\n",
+ "\n",
+ "Visualize the available metrics like `auRoc` and `logLoss` using a bar-chart."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "f9e38f73f838"
+ },
+ "outputs": [],
+ "source": [
+ "metrics = []\n",
+ "values = []\n",
+ "for i in evaluation_metrics.metadata.items():\n",
+ " metrics.append(i[0])\n",
+ " values.append(i[1])\n",
+ "plt.figure(figsize=(5, 3))\n",
+ "plt.bar(x=metrics, height=values)\n",
+ "plt.title(\"Evaluation Metrics\")\n",
+ "plt.ylabel(\"Value\")\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "049c9bbae2cb"
+ },
+ "source": [
+ "### Get the Feature Attributions\n",
+ "\n",
+ "Run the below cell to get the feature attributions. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "03ca8c149bc6"
+ },
+ "outputs": [],
+ "source": [
+ "# Iterate over the pipeline tasks\n",
+ "for task in job._gca_resource.job_detail.task_details:\n",
+ " # Obtain the artifacts from the feature attribution task\n",
+ " if (task.task_name == \"feature-attribution\") and (\n",
+ " task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
+ " or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
+ " ):\n",
+ " feat_attrs = task.outputs.get(\"feature_attributions\").artifacts[0]\n",
+ " feat_attrs_gcs_uri = feat_attrs.uri\n",
+ "\n",
+ "print(feat_attrs)\n",
+ "print(feat_attrs_gcs_uri)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "719d2cd57d10"
+ },
+ "source": [
+ "From the obtained Cloud Storage uri for the feature attributions, get the attribution values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "82e308dd8aca"
+ },
+ "outputs": [],
+ "source": [
+ "# Load the results\n",
+ "attributions = !gsutil cat $feat_attrs_gcs_uri\n",
+ "\n",
+ "# Print the results obtained\n",
+ "attributions = json.loads(attributions[0])\n",
+ "print(attributions)"
+ ]
+ },
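+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you prefer the Cloud Storage Python client to the `gsutil` shell magic, the same file can be read as sketched below. This assumes the `google-cloud-storage` package, which is installed as a dependency of the Vertex AI SDK."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sketch: read the feature attributions with the Cloud Storage client\n",
+ "from google.cloud import storage\n",
+ "\n",
+ "storage_client = storage.Client(project=PROJECT_ID)\n",
+ "blob = storage.Blob.from_string(feat_attrs_gcs_uri, client=storage_client)\n",
+ "attributions = json.loads(blob.download_as_text())\n",
+ "print(attributions)"
+ ]
+ },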
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5bfe517357f8"
+ },
+ "source": [
+ "### Visualize the Feature Attributions\n",
+ "\n",
+ "Visualize the obtained attributions for each feature using a bar-chart."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "d7a7dca9e3cc"
+ },
+ "outputs": [],
+ "source": [
+ "data = attributions[\"explanation\"][\"attributions\"][0][\"featureAttributions\"]\n",
+ "features = []\n",
+ "attr_values = []\n",
+ "for key, value in data.items():\n",
+ " features.append(key)\n",
+ " attr_values.append(value)\n",
+ "\n",
+ "plt.figure(figsize=(5, 3))\n",
+ "plt.bar(x=features, height=attr_values)\n",
+ "plt.title(\"Feature Attributions\")\n",
+ "plt.xticks(rotation=90)\n",
+ "plt.ylabel(\"Attribution value\")\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TpV-iwP9qw9c"
+ },
+ "source": [
+ "## Cleaning up\n",
+ "\n",
+ "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
+ "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
+ "\n",
+ "Otherwise, you can delete the individual resources you created in this tutorial.\n",
+ "\n",
+ "Set `delete_bucket` to **True** to create the Cloud Storage bucket created in this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sx_vKniMq9ZX"
+ },
+ "outputs": [],
+ "source": [
+ "# Delete model resource\n",
+ "model.delete()\n",
+ "\n",
+ "# Delete the dataset resource\n",
+ "dataset.delete()\n",
+ "\n",
+ "# Delete the training job\n",
+ "train_job.delete()\n",
+ "\n",
+ "# Delete the evaluation pipeline\n",
+ "job.delete()\n",
+ "\n",
+ "# Delete Cloud Storage objects\n",
+ "delete_bucket = False\n",
+ "if delete_bucket or os.getenv(\"IS_TESTING\"):\n",
+ " ! gsutil -m rm -r $BUCKET_URI"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [],
+ "name": "automl_tabular_classification_model_evaluation.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/notebooks/community/model_evaluation/automl_tabular_regression_model_evaluation.ipynb b/notebooks/community/model_evaluation/automl_tabular_regression_model_evaluation.ipynb
new file mode 100644
index 000000000..bbeec9ec8
--- /dev/null
+++ b/notebooks/community/model_evaluation/automl_tabular_regression_model_evaluation.ipynb
@@ -0,0 +1,1483 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ur8xi4C7S06n"
+ },
+ "outputs": [],
+ "source": [
+ "# Copyright 2022 Google LLC\n",
+ "#\n",
+ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JAPoU8Sm5E6e"
+ },
+ "source": [
+ "# Vertex AI Pipelines: Evaluating BatchPrediction results from AutoML Tabular Regression model\n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ " \n",
+ " Run in Colab\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " \n",
+ " View on GitHub\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " \n",
+ " Open in Vertex AI Workbench\n",
+ " \n",
+ " | \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tvgnzT1CKxrO"
+ },
+ "source": [
+ "## Overview\n",
+ "\n",
+ "This notebook demonstrates how to use Vertex AI regression model evaluation component to evaluate an AutoML regression model. Model evaluation helps you determine your model performance based on the evaluation metrics and improve the model if necessary. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d975e698c9a4"
+ },
+ "source": [
+ "### Objective\n",
+ "\n",
+ "In this tutorial, you learn how to evaluate a Vertex AI model resource through a Vertex AI pipeline job using `google_cloud_pipeline_components`:\n",
+ "\n",
+ "This tutorial uses the following Google Cloud ML services and resources:\n",
+ "\n",
+ "- Vertex AI `AutoML`\n",
+ "- Vertex AI `TabularDataset` (AutoML)\n",
+ "- Vertex AI `AutoMLTabularTrainingJob`\n",
+ "- Vertex AI `BatchPrediction`\n",
+ "- Vertex AI `Pipeline`\n",
+ "- Vertex AI `Model Registry`\n",
+ "\n",
+ "\n",
+ "The steps performed include:\n",
+ "\n",
+ "- Create a Vertex AI Dataset\n",
+ "- Configure a `AutoMLTabularTrainingJob`\n",
+ "- Run the `AutoMLTabularTrainingJob` which returns a model\n",
+ "- Import a pre-trained `AutoML model resource` into the pipeline\n",
+ "- Run a `batch prediction` job\n",
+ "- Evaulate the AutoML model using the `regression evaluation component`\n",
+ "- Import the Classification Metrics to the AutoML model resource"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "08d289fa873f"
+ },
+ "source": [
+ "### Dataset\n",
+ "\n",
+ "The dataset being used in this notebook is a part of the PetFinder Dataset, available [here](https://www.kaggle.com/c/petfinder-adoption-prediction) on Kaggle. The current dataset is only a part of the original dataset considered for the problem of predicting age of the pet. It consists of the following fields:\n",
+ "\n",
+ "- `Type`: Type of animal (1 = Dog, 2 = Cat)\n",
+ "- `Age`: Age of pet when listed, in months\n",
+ "- `Breed1`: Primary breed of pet\n",
+ "- `Gender`: Gender of pet\n",
+ "- `Color1`: Color 1 of pet \n",
+ "- `Color2`: Color 2 of pet\n",
+ "- `MaturitySize`: Size at maturity (1 = Small, 2 = Medium, 3 = Large, 4 = Extra Large, 0 = Not Specified)\n",
+ "- `FurLength`: Fur length (1 = Short, 2 = Medium, 3 = Long, 0 = Not Specified)\n",
+ "- `Vaccinated`: Pet has been vaccinated (1 = Yes, 2 = No, 3 = Not Sure)\n",
+ "- `Sterilized`: Pet has been spayed / neutered (1 = Yes, 2 = No, 3 = Not Sure)\n",
+ "- `Health`: Health Condition (1 = Healthy, 2 = Minor Injury, 3 = Serious Injury, 0 = Not Specified)\n",
+ "- `Fee`: Adoption fee (0 = Free)\n",
+ "- `PhotoAmt`: Total uploaded photos for this pet\n",
+ "- `Adopted`: Whether or not the pet was adopted (Yes/No).\n",
+ "\n",
+ "**Note**: This dataset is moved to a public Cloud Storage bucket and is accessed from there in this notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aed92deeb4a0"
+ },
+ "source": [
+ "### Costs \n",
+ "This tutorial uses billable components of Google Cloud:\n",
+ "\n",
+ "* Vertex AI\n",
+ "* Cloud Storage\n",
+ "\n",
+ "Learn about [Vertex AI\n",
+ "pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
+ "pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
+ "Calculator](https://cloud.google.com/products/calculator/)\n",
+ "to generate a cost estimate based on your projected usage."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ze4-nDLfK4pw"
+ },
+ "source": [
+ "### Set up your local development environment\n",
+ "\n",
+ "**If you are using Colab or Vertex AI Workbench Notebooks**, your environment already meets\n",
+ "all the requirements to run this notebook. You can skip this step."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gCuSR8GkAgzl"
+ },
+ "source": [
+ "**Otherwise**, make sure your environment meets this notebook's requirements.\n",
+ "You need the following:\n",
+ "\n",
+ "* The Google Cloud SDK\n",
+ "* Git\n",
+ "* Python 3\n",
+ "* virtualenv\n",
+ "* Jupyter notebook running in a virtual environment with Python 3\n",
+ "\n",
+ "The Google Cloud guide to [Setting up a Python development\n",
+ "environment](https://cloud.google.com/python/setup) and the [Jupyter\n",
+ "installation guide](https://jupyter.org/install) provide detailed instructions\n",
+ "for meeting these requirements. The following steps provide a condensed set of\n",
+ "instructions:\n",
+ "\n",
+ "1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)\n",
+ "\n",
+ "1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)\n",
+ "\n",
+ "1. [Install\n",
+ " virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)\n",
+ " and create a virtual environment that uses Python 3. Activate the virtual environment.\n",
+ "\n",
+ "1. To install Jupyter, run `pip3 install jupyter` on the\n",
+ "command-line in a terminal shell.\n",
+ "\n",
+ "1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.\n",
+ "\n",
+ "1. Open this notebook in the Jupyter Notebook Dashboard."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "i7EUnXsZhAGF"
+ },
+ "source": [
+ "## Installation\n",
+ "\n",
+ "Install the following packages required to execute this notebook. \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2b4ef9b72d43"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "# The Vertex AI Workbench Notebook product has specific requirements\n",
+ "IS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\")\n",
+ "IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n",
+ " \"/opt/deeplearning/metadata/env_version\"\n",
+ ")\n",
+ "\n",
+ "# Vertex AI Notebook requires dependencies to be installed with '--user'\n",
+ "USER_FLAG = \"\"\n",
+ "if IS_WORKBENCH_NOTEBOOK:\n",
+ " USER_FLAG = \"--user\"\n",
+ "\n",
+ "! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q\n",
+ "! pip3 install google-cloud-pipeline-components==1.0.17 {USER_FLAG} -q\n",
+ "! pip3 install --upgrade kfp google-cloud-pipeline-components {USER_FLAG} -q\n",
+ "! pip3 install --upgrade matplotlib {USER_FLAG} -q"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hhq5zEbGg0XX"
+ },
+ "source": [
+ "### Restart the kernel\n",
+ "\n",
+ "After you install the additional packages, you need to restart the notebook kernel so it can find the packages."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "EzrelQZ22IZj"
+ },
+ "outputs": [],
+ "source": [
+ "# Automatically restart kernel after installs\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"IS_TESTING\"):\n",
+ " # Automatically restart kernel after installs\n",
+ " import IPython\n",
+ "\n",
+ " app = IPython.Application.instance()\n",
+ " app.kernel.do_shutdown(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lWEdiXsJg0XY"
+ },
+ "source": [
+ "## Before you begin"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BF1j6f9HApxa"
+ },
+ "source": [
+ "### Set up your Google Cloud project\n",
+ "\n",
+ "**The following steps are required, regardless of your notebook environment.**\n",
+ "\n",
+ "1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.\n",
+ "\n",
+ "1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n",
+ "\n",
+ "1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com). \n",
+ "\n",
+ "1. If you are running this notebook locally, you need to install the [Cloud SDK](https://cloud.google.com/sdk).\n",
+ "\n",
+ "1. Enter your project ID in the cell below. Then run the cell to make sure the\n",
+ "Cloud SDK uses the right project for all the commands in this notebook.\n",
+ "\n",
+ "**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WReHDGG5g0XY"
+ },
+ "source": [
+ "#### Set your project ID\n",
+ "\n",
+ "**If you don't know your project ID**, you may be able to get your project ID using `gcloud`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "oM1iC_MfAts1"
+ },
+ "outputs": [],
+ "source": [
+ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "riG_qUokg0XZ"
+ },
+ "outputs": [],
+ "source": [
+ "if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n",
+ " # Get your GCP project id from gcloud\n",
+ " shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n",
+ " PROJECT_ID = shell_output[0]\n",
+ " print(\"Project ID:\", PROJECT_ID)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "set_gcloud_project_id"
+ },
+ "outputs": [],
+ "source": [
+ "! gcloud config set project $PROJECT_ID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "region"
+ },
+ "source": [
+ "#### Region\n",
+ "\n",
+ "You can also change the `REGION` variable, which is used for operations\n",
+ "throughout the rest of this notebook. Below are regions supported for Vertex AI. It is recommended that you choose the region closest to you.\n",
+ "\n",
+ "- Americas: `us-central1`\n",
+ "- Europe: `europe-west4`\n",
+ "- Asia Pacific: `asia-east1`\n",
+ "\n",
+ "You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\n",
+ "\n",
+ "Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sduDOFQVF6kv"
+ },
+ "outputs": [],
+ "source": [
+ "REGION = \"[your-region]\" # @param {type: \"string\"}\n",
+ "\n",
+ "if REGION == \"[your-region]\":\n",
+ " REGION = \"us-central1\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "06571eb4063b"
+ },
+ "source": [
+ "#### UUID\n",
+ "\n",
+ "If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a uuid for each instance session, and append it onto the name of resources you create in this tutorial.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "697568e92bd6"
+ },
+ "outputs": [],
+ "source": [
+ "import random\n",
+ "import string\n",
+ "\n",
+ "\n",
+     "# Generate a uuid of a specified length (default=8)\n",
+ "def generate_uuid(length: int = 8) -> str:\n",
+ " return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n",
+ "\n",
+ "\n",
+ "UUID = generate_uuid()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dr--iN2kAylZ"
+ },
+ "source": [
+ "### Authenticate your Google Cloud account\n",
+ "\n",
+ "**If you are using Vertex AI Workbench Notebooks**, your environment is already\n",
+ "authenticated. Skip this step."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sBCra4QMA2wR"
+ },
+ "source": [
+ "**If you are using Colab**, run the cell below and follow the instructions\n",
+ "when prompted to authenticate your account via oAuth.\n",
+ "\n",
+ "**Otherwise**, follow these steps:\n",
+ "\n",
+ "1. In the Cloud Console, go to the [**Create service account key**\n",
+ " page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).\n",
+ "\n",
+ "2. Click **Create service account**.\n",
+ "\n",
+ "3. In the **Service account name** field, enter a name, and\n",
+ " click **Create**.\n",
+ "\n",
+ "4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type \"Vertex AI\"\n",
+ "into the filter box, and select\n",
+ " **Vertex AI Administrator**. Type \"Storage Object Admin\" into the filter box, and select **Storage Object Admin**.\n",
+ "\n",
+ "5. Click *Create*. A JSON file that contains your key downloads to your\n",
+ "local environment.\n",
+ "\n",
+ "6. Enter the path to your service account key as the\n",
+ "`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "PyQmSRbKA8r-"
+ },
+ "outputs": [],
+ "source": [
+ "# If you are running this notebook in Colab, run this cell and follow the\n",
+ "# instructions to authenticate your GCP account. This provides access to your\n",
+ "# Cloud Storage bucket and lets you submit training jobs and prediction\n",
+ "# requests.\n",
+ "\n",
+ "import os\n",
+ "import sys\n",
+ "\n",
+ "# If on Vertex AI Workbench, then don't execute this code\n",
+ "IS_COLAB = \"google.colab\" in sys.modules\n",
+ "if not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n",
+ " \"DL_ANACONDA_HOME\"\n",
+ "):\n",
+ " if \"google.colab\" in sys.modules:\n",
+ " from google.colab import auth as google_auth\n",
+ "\n",
+ " google_auth.authenticate_user()\n",
+ "\n",
+ " # If you are running this notebook locally, replace the string below with the\n",
+ " # path to your service account key and run this cell to authenticate your GCP\n",
+ " # account.\n",
+ " elif not os.getenv(\"IS_TESTING\"):\n",
+ " %env GOOGLE_APPLICATION_CREDENTIALS ''"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zgPO1eR3CYjk"
+ },
+ "source": [
+ "### Create a Cloud Storage bucket\n",
+ "\n",
+ "**The following steps are required, regardless of your notebook environment.**\n",
+ "\n",
+ "When you run a Vertex AI pipeline job using the Cloud SDK, your job stores the pipeline artifacts to a Cloud Storage bucket. In this tutorial, you create a Vertex AI Pipeline job that saves the artifacts like evaluation metrics and feature attributes to a Cloud Storage bucket.\n",
+ "\n",
+ "Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "MzGDU7TWdts_"
+ },
+ "outputs": [],
+ "source": [
+ "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n",
+ "BUCKET_URI = f\"gs://{BUCKET_NAME}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "cf221059d072"
+ },
+ "outputs": [],
+ "source": [
+ "if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n",
+ " BUCKET_NAME = PROJECT_ID + \"aip-\" + UUID\n",
+ " BUCKET_URI = f\"gs://{BUCKET_NAME}\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-EcIXiGsCePi"
+ },
+ "source": [
+ "**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "NIq7R4HZCfIc"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ucvCsknMCims"
+ },
+ "source": [
+ "Finally, validate access to your Cloud Storage bucket by examining its contents:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vhOb7YnwClBb"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil ls -al $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "set_service_account"
+ },
+ "source": [
+ "#### Service Account\n",
+ "\n",
+ "You use a service account to create Vertex AI Pipeline jobs. If you do not want to use your project's Compute Engine service account, set `SERVICE_ACCOUNT` to another service account ID."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UwC1AdGeF6kx"
+ },
+ "outputs": [],
+ "source": [
+ "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "autoset_service_account"
+ },
+ "outputs": [],
+ "source": [
+ "if (\n",
+ " SERVICE_ACCOUNT == \"\"\n",
+ " or SERVICE_ACCOUNT is None\n",
+ " or SERVICE_ACCOUNT == \"[your-service-account]\"\n",
+ "):\n",
+ " # Get your service account from gcloud\n",
+ " if not IS_COLAB:\n",
+ " shell_output = !gcloud auth list 2>/dev/null\n",
+ " SERVICE_ACCOUNT = shell_output[2].replace(\"*\", \"\").strip()\n",
+ "\n",
+ " else: # IS_COLAB:\n",
+ " shell_output = ! gcloud projects describe $PROJECT_ID\n",
+ " project_number = shell_output[-1].split(\":\")[1].strip().replace(\"'\", \"\")\n",
+ " SERVICE_ACCOUNT = f\"{project_number}-compute@developer.gserviceaccount.com\"\n",
+ "\n",
+ " print(\"Service Account:\", SERVICE_ACCOUNT)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "set_service_account:pipelines"
+ },
+ "source": [
+ "#### Set service account access for Vertex AI Pipelines\n",
+ "\n",
+ "Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step. You only need to run this step once per service account."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "6OqzKqhMF6kx"
+ },
+ "outputs": [],
+ "source": [
+ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI\n",
+ "\n",
+ "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XoEqT2Y4DJmf"
+ },
+ "source": [
+ "### Import libraries"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "pRUOFELefqf1"
+ },
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "import google.cloud.aiplatform as aiplatform\n",
+ "import kfp\n",
+ "import matplotlib.pyplot as plt\n",
+ "from google.cloud import aiplatform_v1\n",
+ "from kfp.v2 import compiler"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "init_aip:mbsdk,all"
+ },
+ "source": [
+ "### Initialize Vertex AI SDK for Python\n",
+ "\n",
+ "Initialize the Vertex AI SDK for Python for your project and corresponding bucket."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ksAefQcCF6ky"
+ },
+ "outputs": [],
+ "source": [
+ "aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BiVlyW5OUnjK"
+ },
+ "source": [
+ "## Create Vertex AI Dataset\n",
+ "\n",
+ "Create a managed tabular dataset resource in Vertex AI using the dataset source."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "bViYfWfpVAiF"
+ },
+ "outputs": [],
+ "source": [
+ "DATA_SOURCE = \"gs://cloud-samples-data/ai-platform-unified/datasets/tabular/petfinder-tabular-classification.csv\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "20S9En09X0PY"
+ },
+ "outputs": [],
+ "source": [
+ "# Create the Vertex AI Dataset resource\n",
+ "dataset = aiplatform.TabularDataset.create(\n",
+ " display_name=\"petfinder-tabular-dataset\",\n",
+ " gcs_source=DATA_SOURCE,\n",
+ ")\n",
+ "\n",
+ "print(\"Resource name:\", dataset.resource_name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "A-QQkeUnq8Xt"
+ },
+ "source": [
+ "## Train AutoML model\n",
+ "\n",
+     "Train a simple regression model on the created dataset, using `Age` as the target column.\n",
+ "\n",
+ "Set a display name and create the `AutoMLTabularTrainingJob` with appropriate data types specified for column transformations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Bxn6ATUXrET6"
+ },
+ "outputs": [],
+ "source": [
+ "TRAINING_JOB_DISPLAY_NAME = \"[your-train-job-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2e7664fe3af6"
+ },
+ "outputs": [],
+ "source": [
+ "# If no display name is specified, use the default one\n",
+ "if (\n",
+ " TRAINING_JOB_DISPLAY_NAME == \"\"\n",
+ " or TRAINING_JOB_DISPLAY_NAME is None\n",
+ " or TRAINING_JOB_DISPLAY_NAME == \"[your-train-job-display-name]\"\n",
+ "):\n",
+ " TRAINING_JOB_DISPLAY_NAME = \"train-pet-agefinder-automl_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6cb41277f4f3"
+ },
+ "source": [
+     "An AutoML training job is created with the `AutoMLTabularTrainingJob` class, with the following parameters:\n",
+     "\n",
+     "- `display_name`: The human-readable name for the `TrainingJob` resource.\n",
+     "- `optimization_prediction_type`: The type of prediction the model is to produce, for example `regression` or `classification`.\n",
+     "- `column_specs`: The transformations to apply to the input columns (that is, columns other than the target column). Each transformation can produce multiple result values from the column's value, and all of them are used for training. Columns with no transformation specified are ignored by the training, except for the target column, which must have no transformation. When specifying a transformation for a BigQuery Struct column, flatten the column using \".\" as the delimiter, and only specify transformations for leaf columns. Pass only one of `column_specs` or `column_transformations`; `column_transformations` will eventually be deprecated in favor of `column_specs`. If neither is passed, the transformations default to \"auto\", which requires that the credentials in use have read access to the Cloud Storage or BigQuery training data source.\n",
+     "- `optimization_objective`: The optimization objective to minimize or maximize, for example:\n",
+     "  - `minimize-rmse`\n",
+     "  - `minimize-mae`\n",
+     "  - `minimize-rmsle`\n",
+     "\n",
+     "To learn more, see the [`AutoMLTabularTrainingJob` reference](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.AutoMLTabularTrainingJob)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3l691PEMZFdA"
+ },
+ "outputs": [],
+ "source": [
+ "train_job = aiplatform.AutoMLTabularTrainingJob(\n",
+ " display_name=TRAINING_JOB_DISPLAY_NAME,\n",
+ " optimization_prediction_type=\"regression\",\n",
+ " column_specs={\n",
+ " \"Type\": \"categorical\",\n",
+ " \"Breed1\": \"categorical\",\n",
+ " \"Gender\": \"categorical\",\n",
+ " \"Color1\": \"categorical\",\n",
+ " \"Color2\": \"categorical\",\n",
+ " \"MaturitySize\": \"categorical\",\n",
+ " \"FurLength\": \"categorical\",\n",
+ " \"Vaccinated\": \"categorical\",\n",
+ " \"Sterilized\": \"categorical\",\n",
+ " \"Health\": \"categorical\",\n",
+ " \"Fee\": \"numeric\",\n",
+ " \"PhotoAmt\": \"numeric\",\n",
+ " \"Adopted\": \"categorical\",\n",
+ " },\n",
+ " optimization_objective=\"minimize-rmse\",\n",
+ ")\n",
+ "\n",
+ "print(train_job)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "391c51c98647"
+ },
+ "source": [
+ "Set the display name for the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "454f077b984e"
+ },
+ "outputs": [],
+ "source": [
+ "MODEL_DISPLAY_NAME = \"[your-model-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "c4f338cdea7c"
+ },
+ "outputs": [],
+ "source": [
+ "# If no name is specified, use the default name\n",
+ "if (\n",
+ " MODEL_DISPLAY_NAME == \"\"\n",
+ " or MODEL_DISPLAY_NAME is None\n",
+ " or MODEL_DISPLAY_NAME == \"[your-model-display-name]\"\n",
+ "):\n",
+ " MODEL_DISPLAY_NAME = \"pet-agefinder-prediction-model_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "de7e24205889"
+ },
+ "source": [
+ "Next, you start the training job by invoking the method `run`, with the following parameters:\n",
+ "\n",
+ "- `dataset`: The `Dataset` resource to train the model.\n",
+     "- `target_column`: The name of the column whose values the model is to predict.\n",
+ "- `training_fraction_split`: The percentage of the dataset to use for training.\n",
+ "- `validation_fraction_split`: The percentage of the dataset to use for validation.\n",
+ "- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).\n",
+ "- `model_display_name`: The human readable name for the trained model.\n",
+ "- `disable_early_stopping`: If true, the entire budget is used.\n",
+     "- `budget_milli_node_hours`: (Optional) The maximum training time, specified in milli node hours (1,000 milli node hours = 1 node hour).\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5caae7fc10d9"
+ },
+ "source": [
+ "The training job takes roughly 1.5-2 hours to finish."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "IIfvPCGYyFCT"
+ },
+ "outputs": [],
+ "source": [
+ "# Run the training job\n",
+ "model = train_job.run(\n",
+ " dataset=dataset,\n",
+ " target_column=\"Age\",\n",
+ " training_fraction_split=0.8,\n",
+ " validation_fraction_split=0.1,\n",
+ " test_fraction_split=0.1,\n",
+ " model_display_name=MODEL_DISPLAY_NAME,\n",
+ " disable_early_stopping=False,\n",
+ " budget_milli_node_hours=1000,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rYirKB_9yaa0"
+ },
+ "source": [
+ "## List model evaluations from training\n",
+ "\n",
+ "After the training job is finished, get the model evaluations and print them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KkgCdQQAyZP1"
+ },
+ "outputs": [],
+ "source": [
+ "# Get evaluations\n",
+ "model_evaluations = model.list_model_evaluations()\n",
+ "\n",
+ "model_evaluation = list(model_evaluations)[0]\n",
+ "print(model_evaluation)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3f4d0c17150d"
+ },
+ "outputs": [],
+ "source": [
+ "# Print the evaluation metrics\n",
+ "for evaluation in model_evaluations:\n",
+ " evaluation = evaluation.to_dict()\n",
+ " print(\"Model's evaluation metrics from Training:\\n\")\n",
+ " metrics = evaluation[\"metrics\"]\n",
+ " for metric in metrics.keys():\n",
+ " print(f\"metric: {metric}, value: {metrics[metric]}\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "581a188f0453"
+ },
+ "source": [
+ "## Create Pipeline for evaluations\n",
+ "\n",
+     "Now, you run a Vertex AI BatchPrediction job and generate evaluations and feature attributions from its results.\n",
+ "\n",
+ "To do so, you create a Vertex AI pipeline using the components available from the [`google-cloud-pipeline-components`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/index.html) python package.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5adde0951eb5"
+ },
+ "source": [
+ "### Define the Pipeline\n",
+ "\n",
+     "While defining the flow of the pipeline, you first get the trained model resource. Next, you sample the provided source dataset and run a batch prediction job on the sample, with explanations enabled so that feature attributions are generated. Once the batch prediction job completes, you compute the regression evaluation metrics and feature attributions from its results and import them to the model resource.\n",
+ "\n",
+ "The pipeline uses the following components:\n",
+ "\n",
+ "- `GetVertexModelOp`: Gets a Vertex Model Artifact. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.GetVertexModelOp).\n",
+ "- `EvaluationDataSamplerOp`: Randomly downsamples an input dataset to a specified size for computing Vertex XAI feature attributions for AutoML Tables and custom models. Creates a Dataflow job with Apache Beam to downsample the dataset. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.EvaluationDataSamplerOp).\n",
+ "- `ModelBatchPredictOp`: Creates a Google Cloud Vertex BatchPredictionJob and waits for it to complete. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.aiplatform.html#google_cloud_pipeline_components.aiplatform.ModelBatchPredictOp).\n",
+     "- `ModelEvaluationRegressionOp`: Compute evaluation metrics on a trained model’s batch prediction results. Creates a Dataflow job with Apache Beam and TFMA to compute evaluation metrics. Supports regression for tabular data. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelEvaluationRegressionOp).\n",
+ "- `ModelEvaluationFeatureAttributionOp`: Compute feature attribution on a trained model’s batch explanation results. Creates a Dataflow job with Apache Beam and TFMA to compute feature attributions. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelEvaluationFeatureAttributionOp).\n",
+ "- `ModelImportEvaluationOp`: Imports a model evaluation artifact to an existing Vertex model with ModelService.ImportModelEvaluation. For more details, please check [here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.17/google_cloud_pipeline_components.experimental.evaluation.html#google_cloud_pipeline_components.experimental.evaluation.ModelImportEvaluationOp)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ce6beLsXASnK"
+ },
+ "source": [
+ "## Model Evaluation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ktMsqtibAUzz"
+ },
+ "outputs": [],
+ "source": [
+ "@kfp.dsl.pipeline(\n",
+ " name=\"vertex-evaluation-automl-tabular-regression-feature-attribution\"\n",
+ ")\n",
+ "def evaluation_automl_tabular_feature_attribution_pipeline(\n",
+ " project: str,\n",
+ " location: str,\n",
+ " root_dir: str,\n",
+ " model_name: str,\n",
+ " target_column_name: str,\n",
+ " batch_predict_gcs_source_uris: list,\n",
+ " batch_predict_instances_format: str,\n",
+ " batch_predict_predictions_format: str = \"jsonl\",\n",
+ " batch_predict_machine_type: str = \"n1-standard-4\",\n",
+ " batch_predict_explanation_metadata: dict = {},\n",
+ " batch_predict_explanation_parameters: dict = {},\n",
+ " batch_predict_explanation_data_sample_size: int = 10000,\n",
+ " dataflow_max_num_workers: int = 5,\n",
+ " dataflow_use_public_ips: bool = True,\n",
+ " encryption_spec_key_name: str = \"\",\n",
+ "):\n",
+ "\n",
+ " from google_cloud_pipeline_components.aiplatform import ModelBatchPredictOp\n",
+ " from google_cloud_pipeline_components.experimental.evaluation import (\n",
+ " EvaluationDataSamplerOp, GetVertexModelOp,\n",
+ " ModelEvaluationFeatureAttributionOp, ModelEvaluationRegressionOp,\n",
+ " ModelImportEvaluationOp)\n",
+ "\n",
+ " # Get the Vertex AI model resource\n",
+ " get_model_task = GetVertexModelOp(model_resource_name=model_name)\n",
+ "\n",
+ " # Run Data-sampling task\n",
+ " data_sampler_task = EvaluationDataSamplerOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " gcs_source_uris=batch_predict_gcs_source_uris,\n",
+ " instances_format=batch_predict_instances_format,\n",
+ " sample_size=batch_predict_explanation_data_sample_size,\n",
+ " )\n",
+ "\n",
+ " # Run Batch Explanations\n",
+ " batch_explain_task = ModelBatchPredictOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " model=get_model_task.outputs[\"model\"],\n",
+ " job_display_name=\"model-registry-batch-predict-evaluation\",\n",
+ " gcs_source_uris=data_sampler_task.outputs[\"gcs_output_directory\"],\n",
+ " instances_format=batch_predict_instances_format,\n",
+ " predictions_format=batch_predict_predictions_format,\n",
+ " gcs_destination_output_uri_prefix=root_dir,\n",
+ " machine_type=batch_predict_machine_type,\n",
+ " encryption_spec_key_name=encryption_spec_key_name,\n",
+ " # Set the explanation parameters\n",
+ " generate_explanation=True,\n",
+ " explanation_parameters=batch_predict_explanation_parameters,\n",
+ " explanation_metadata=batch_predict_explanation_metadata,\n",
+ " )\n",
+ "\n",
+ " # Run evaluation based on prediction type and feature attribution component.\n",
+ " # After, import the model evaluations to the Vertex model.\n",
+ " eval_task = ModelEvaluationRegressionOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " ground_truth_column=target_column_name,\n",
+ " predictions_gcs_source=batch_explain_task.outputs[\"gcs_output_directory\"],\n",
+ " predictions_format=batch_predict_predictions_format,\n",
+ " dataflow_max_workers_num=dataflow_max_num_workers,\n",
+ " dataflow_use_public_ips=dataflow_use_public_ips,\n",
+ " encryption_spec_key_name=encryption_spec_key_name,\n",
+ " )\n",
+ "\n",
+ " # Get Feature Attributions\n",
+ " feature_attribution_task = ModelEvaluationFeatureAttributionOp(\n",
+ " project=project,\n",
+ " location=location,\n",
+ " root_dir=root_dir,\n",
+ " predictions_format=\"jsonl\",\n",
+ " predictions_gcs_source=batch_explain_task.outputs[\"gcs_output_directory\"],\n",
+ " dataflow_max_workers_num=dataflow_max_num_workers,\n",
+ " dataflow_use_public_ips=dataflow_use_public_ips,\n",
+ " encryption_spec_key_name=encryption_spec_key_name,\n",
+ " )\n",
+ "\n",
+ " ModelImportEvaluationOp(\n",
+ " regression_metrics=eval_task.outputs[\"evaluation_metrics\"],\n",
+ " feature_attributions=feature_attribution_task.outputs[\"feature_attributions\"],\n",
+ " model=get_model_task.outputs[\"model\"],\n",
+ " dataset_type=batch_predict_instances_format,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "a712dfa762ee"
+ },
+ "source": [
+ "### Compile the pipeline\n",
+ "\n",
+     "Next, compile the pipeline to the `tabular_regression_pipeline.json` file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "NOvOMTEgCVcW"
+ },
+ "outputs": [],
+ "source": [
+ "compiler.Compiler().compile(\n",
+ " pipeline_func=evaluation_automl_tabular_feature_attribution_pipeline,\n",
+ " package_path=\"tabular_regression_pipeline.json\",\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bdd9e2fd6841"
+ },
+ "source": [
+ "### Define the parameters to run the pipeline\n",
+ "\n",
+ "Specify the required parameters to run the pipeline.\n",
+ "\n",
+ "Set a display name for your pipeline."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "8f17c5c7b3e3"
+ },
+ "outputs": [],
+ "source": [
+ "PIPELINE_DISPLAY_NAME = \"[your-pipeline-display-name]\" # @param {type:\"string\"}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1aa7d7bbb1c9"
+ },
+ "outputs": [],
+ "source": [
+ "# If no display name is set, use the default one\n",
+ "if (\n",
+ " PIPELINE_DISPLAY_NAME == \"[your-pipeline-display-name]\"\n",
+ " or PIPELINE_DISPLAY_NAME == \"\"\n",
+ " or PIPELINE_DISPLAY_NAME is None\n",
+ "):\n",
+ " PIPELINE_DISPLAY_NAME = \"pet_agefinder_\" + UUID"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "90f424d5dca0"
+ },
+ "source": [
+     "To pass the required arguments to the pipeline, you define the following parameters below:\n",
+ "\n",
+ "- `project`: Project ID.\n",
+ "- `location`: Region where the pipeline is run.\n",
+ "- `root_dir`: The GCS directory for keeping staging files and artifacts. A random subdirectory will be created under the directory to keep job info for resuming the job in case of failure.\n",
+ "- `model_name`: Resource name of the trained AutoML Tabular Regression model.\n",
+ "- `target_column_name`: Name of the column to be used as the target for regression.\n",
+ "- `batch_predict_gcs_source_uris`: List of the Cloud Storage bucket uris of input instances for batch prediction.\n",
+ "- `batch_predict_instances_format`: Format of the input instances for batch prediction. Can be \"jsonl\", \"csv\" or \"bigquery\".\n",
+ "- `batch_predict_explanation_data_sample_size`: Size of the samples to be considered for batch prediction and evaluation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "JeSiA6-TSgV8"
+ },
+ "outputs": [],
+ "source": [
+ "PIPELINE_ROOT = f\"{BUCKET_URI}/pipeline_root/pet_agefinder_{UUID}\"\n",
+ "parameters = {\n",
+ " \"project\": PROJECT_ID,\n",
+ " \"location\": REGION,\n",
+ " \"root_dir\": PIPELINE_ROOT,\n",
+ " \"model_name\": model.resource_name,\n",
+ " \"target_column_name\": \"Age\",\n",
+ " \"batch_predict_gcs_source_uris\": [DATA_SOURCE],\n",
+ " \"batch_predict_instances_format\": \"csv\",\n",
+ " \"batch_predict_explanation_data_sample_size\": 3000,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "859fa6611d9a"
+ },
+ "source": [
+ "Next, you create the pipeline job, with the following parameters:\n",
+ "\n",
+ "- `display_name`: The user-defined name of this Pipeline.\n",
+ "- `template_path`: The path of PipelineJob or PipelineSpec JSON or YAML file. It can be a local path, a Google Cloud Storage URI (e.g. \"gs://project.name\"), or an Artifact Registry URI (e.g. \"https://us-central1-kfp.pkg.dev/proj/repo/pack/latest\").\n",
+ "- `parameter_values`: The mapping from runtime parameter names to its values that control the pipeline run.\n",
+ "- `enable_caching`: Whether to turn on caching for the run. If this is not set, defaults to the compile time settings, which are True for all tasks by default, while users may specify different caching options for individual tasks. If this is set, the setting applies to all tasks in the pipeline. Overrides the compile time settings.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e8dce0638349"
+ },
+ "source": [
+ "Run the pipeline using the configured `SERVICE_ACCOUNT`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "pdHib_yUEuEk"
+ },
+ "outputs": [],
+ "source": [
+ "job = aiplatform.PipelineJob(\n",
+ " display_name=PIPELINE_DISPLAY_NAME,\n",
+ " template_path=\"tabular_regression_pipeline.json\",\n",
+ " parameter_values=parameters,\n",
+ " enable_caching=True,\n",
+ ")\n",
+ "\n",
+ "job.run(service_account=SERVICE_ACCOUNT)"
+ ]
+ },
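+   {
+    "cell_type": "markdown",
+    "metadata": {
+     "id": "check_pipeline_state"
+    },
+    "source": [
+     "After the run completes, you can optionally confirm its final state. The following cell is a small optional check that assumes the `job` object from the previous cell is still in memory."
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {
+     "id": "print_pipeline_state"
+    },
+    "outputs": [],
+    "source": [
+     "# Optional sanity check: print the final state and resource name of the pipeline run\n",
+     "print(\"Pipeline state:\", job.state)\n",
+     "print(\"Pipeline resource name:\", job.resource_name)"
+    ]
+   },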
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "625960707c60"
+ },
+ "source": [
+ "## Model Evaluation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mKRTDi8ioXBY"
+ },
+ "source": [
+     "In the results from the last step, click the generated link to see your run in the Cloud Console.\n",
+ "\n",
+ "In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "U2zocUvk2YVs"
+ },
+ "source": [
+     "<img src=\"images/automl_tabular_regression_evaluation_pipeline.PNG\">"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XcKaONSsGNC4"
+ },
+ "source": [
+ "### Get the Model Evaluation Results\n",
+ "\n",
+     "After the evaluation pipeline is finished, run the cell below to print the evaluation metrics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "mtHA8rhGGQv3"
+ },
+ "outputs": [],
+ "source": [
+ "# Iterate over the pipeline tasks\n",
+ "for task in job._gca_resource.job_detail.task_details:\n",
+ " # Obtain the artifacts from the evaluation task\n",
+ " if (\n",
+ " (\"model-evaluation\" in task.task_name)\n",
+ " and (\"model-evaluation-import\" not in task.task_name)\n",
+ " and (\n",
+ " task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
+ " or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
+ " )\n",
+ " ):\n",
+ " evaluation_metrics = task.outputs.get(\"evaluation_metrics\").artifacts[\n",
+ " 0\n",
+ " ] # ['artifacts']\n",
+ " evaluation_metrics_gcs_uri = evaluation_metrics.uri\n",
+ "\n",
+ "print(evaluation_metrics)\n",
+ "print(evaluation_metrics_gcs_uri)"
+ ]
+ },
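+   {
+    "cell_type": "markdown",
+    "metadata": {
+     "id": "load_metrics_from_gcs"
+    },
+    "source": [
+     "Optionally, you can also read the metrics artifact directly from the Cloud Storage URI obtained above, using the same `gsutil` and `json` pattern used later for the feature attributions. The next cell is a minimal sketch that assumes the artifact at `evaluation_metrics_gcs_uri` is a plain JSON file."
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {
+     "id": "read_metrics_json"
+    },
+    "outputs": [],
+    "source": [
+     "# Optional: read the raw metrics JSON from Cloud Storage.\n",
+     "# This assumes the artifact at evaluation_metrics_gcs_uri is a plain JSON file.\n",
+     "raw_metrics = !gsutil cat $evaluation_metrics_gcs_uri\n",
+     "\n",
+     "raw_metrics = json.loads(\"\".join(raw_metrics))\n",
+     "print(raw_metrics)"
+    ]
+   },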
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "e69f183f902b"
+ },
+ "source": [
+ "### Visualize the metrics\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "b7c5e5c35ee9"
+ },
+ "outputs": [],
+ "source": [
+ "metrics = []\n",
+ "values = []\n",
+ "for i in evaluation_metrics.metadata.items():\n",
+ " if (\n",
+ " i[0] == \"meanAbsolutePercentageError\"\n",
+     " ): # Skip MAPE because it is infinite when the ground truth is 0, which happens here since Age is 0 for some instances.\n",
+ " continue\n",
+ " metrics.append(i[0])\n",
+ " values.append(i[1])\n",
+ "plt.figure(figsize=(10, 5))\n",
+ "plt.bar(x=metrics, height=values)\n",
+ "plt.title(\"Evaluation Metrics\")\n",
+ "plt.ylabel(\"Value\")\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "c26ad3958895"
+ },
+ "source": [
+ "### Get the Feature Attributions\n",
+ "\n",
+ "Feature attributions indicate how much each feature in your model contributed to the predictions for each given instance.\n",
+ "\n",
+     "To learn more about Feature Attributions, click [here](https://cloud.google.com/vertex-ai/docs/explainable-ai/overview#feature_attributions).\n",
+ "\n",
+ "Run the below cell to get the feature attributions. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "b09056628b26"
+ },
+ "outputs": [],
+ "source": [
+ "# Iterate over the pipeline tasks\n",
+ "for task in job._gca_resource.job_detail.task_details:\n",
+ " # Obtain the artifacts from the feature-attribution task\n",
+ " if (task.task_name == \"feature-attribution\") and (\n",
+ " task.state == aiplatform_v1.types.PipelineTaskDetail.State.SUCCEEDED\n",
+ " or task.state == aiplatform_v1.types.PipelineTaskDetail.State.SKIPPED\n",
+ " ):\n",
+ " feat_attrs = task.outputs.get(\"feature_attributions\").artifacts[0]\n",
+ " feat_attrs_gcs_uri = feat_attrs.uri\n",
+ "\n",
+ "print(feat_attrs)\n",
+ "print(feat_attrs_gcs_uri)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4d9d6a82d826"
+ },
+ "source": [
+     "From the obtained Cloud Storage URI for the feature attributions, get the attribution values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "c26a2091f4fc"
+ },
+ "outputs": [],
+ "source": [
+ "# Load the results\n",
+ "attributions = !gsutil cat $feat_attrs_gcs_uri\n",
+ "\n",
+ "# Print the results obtained\n",
+ "attributions = json.loads(attributions[0])\n",
+ "print(attributions)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "77151be8d776"
+ },
+ "source": [
+ "### Visualize the Feature Attributions\n",
+ "\n",
+ "Visualize the obtained attributions for each feature using a bar-chart."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "069bf017e0de"
+ },
+ "outputs": [],
+ "source": [
+ "data = attributions[\"explanation\"][\"attributions\"][0][\"featureAttributions\"]\n",
+ "features = []\n",
+ "attr_values = []\n",
+ "for key, value in data.items():\n",
+ " features.append(key)\n",
+ " attr_values.append(value)\n",
+ "\n",
+ "plt.figure(figsize=(5, 3))\n",
+ "plt.bar(x=features, height=attr_values)\n",
+ "plt.title(\"Feature Attributions\")\n",
+ "plt.xticks(rotation=90)\n",
+ "plt.ylabel(\"Attribution value\")\n",
+ "plt.show()"
+ ]
+ },
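+   {
+    "cell_type": "markdown",
+    "metadata": {
+     "id": "sorted_feature_attributions"
+    },
+    "source": [
+     "Optionally, when there are many features, the chart is easier to read if the features are sorted by attribution value. The next cell is a small optional variation that reuses the `features` and `attr_values` lists from the previous cell."
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {
+     "id": "plot_sorted_feature_attributions"
+    },
+    "outputs": [],
+    "source": [
+     "# Optional: plot the attributions sorted in descending order of value\n",
+     "sorted_pairs = sorted(zip(features, attr_values), key=lambda pair: pair[1], reverse=True)\n",
+     "sorted_features = [pair[0] for pair in sorted_pairs]\n",
+     "sorted_values = [pair[1] for pair in sorted_pairs]\n",
+     "\n",
+     "plt.figure(figsize=(5, 3))\n",
+     "plt.bar(x=sorted_features, height=sorted_values)\n",
+     "plt.title(\"Feature Attributions (sorted)\")\n",
+     "plt.xticks(rotation=90)\n",
+     "plt.ylabel(\"Attribution value\")\n",
+     "plt.show()"
+    ]
+   },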
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TpV-iwP9qw9c"
+ },
+ "source": [
+ "## Cleaning up\n",
+ "\n",
+ "To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\n",
+ "project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
+ "\n",
+ "Otherwise, you can delete the individual resources you created in this tutorial.\n",
+ "\n",
+     "Set `delete_bucket` to **True** to delete the Cloud Storage bucket created in this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sx_vKniMq9ZX"
+ },
+ "outputs": [],
+ "source": [
+ "# Delete model resource\n",
+ "model.delete()\n",
+ "\n",
+ "# Delete the dataset resource\n",
+ "dataset.delete()\n",
+ "\n",
+ "# Delete the training job\n",
+ "train_job.delete()\n",
+ "\n",
+ "# Delete the evaluation pipeline\n",
+ "job.delete()\n",
+ "\n",
+ "# Delete Cloud Storage objects\n",
+ "delete_bucket = False\n",
+ "if delete_bucket or os.getenv(\"IS_TESTING\"):\n",
+ " ! gsutil -m rm -r $BUCKET_URI"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [],
+ "name": "automl_tabular_regression_model_evaluation.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/notebooks/community/model_evaluation/images/automl_tabular_classification_evaluation_pipeline.PNG b/notebooks/community/model_evaluation/images/automl_tabular_classification_evaluation_pipeline.PNG
new file mode 100644
index 000000000..80e59879f
Binary files /dev/null and b/notebooks/community/model_evaluation/images/automl_tabular_classification_evaluation_pipeline.PNG differ
diff --git a/notebooks/community/model_evaluation/images/automl_tabular_regression_evaluation_pipeline.PNG b/notebooks/community/model_evaluation/images/automl_tabular_regression_evaluation_pipeline.PNG
new file mode 100644
index 000000000..10cc8a238
Binary files /dev/null and b/notebooks/community/model_evaluation/images/automl_tabular_regression_evaluation_pipeline.PNG differ