Manage datasets

This document describes how to copy datasets, recreate datasets in another location, secure datasets, delete datasets, and restore tables from deleted datasets in BigQuery. For information about how to restore (or undelete) a deleted dataset, see Restore deleted datasets.

As a BigQuery administrator, you can organize and control access to tables and views that analysts use. For more information about datasets, see Introduction to datasets.

You cannot change the name of an existing dataset or relocate a dataset after it's created. As a workaround for changing the dataset name, you can copy a dataset and change the destination dataset's name. To relocate a dataset, you can copy the dataset to a destination in the new location and then, if needed, delete the original dataset.

Required roles

This section describes the roles and permissions that you need to manage datasets. If your source or destination dataset is in the same project as the one you are using to copy, then you don't need extra permissions or roles on that dataset.

Copy a dataset

Grant these roles to copy a dataset. Copying datasets is currently in Beta.

To get the permissions that you need to copy datasets, ask your administrator to grant you the following IAM roles:

  • BigQuery Admin (roles/bigquery.admin) on the destination project
  • BigQuery Data Viewer (roles/bigquery.dataViewer) on the source dataset
  • BigQuery Data Editor (roles/bigquery.dataEditor) on the destination dataset

For more information about granting roles, see Manage access to projects, folders, and organizations.
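
If you have permission to manage IAM policy on the destination project, one way to grant the project-level role is with the Google Cloud CLI. The following is a minimal sketch; the project ID and user email are placeholders:

  gcloud projects add-iam-policy-binding mydestproject \
      --member="user:analyst@example.com" \
      --role="roles/bigquery.admin"

Dataset-level roles, such as BigQuery Data Viewer on the source dataset, are granted on the dataset itself rather than on the project.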

These predefined roles contain the permissions required to copy datasets. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to copy datasets:

  • bigquery.transfers.update on the destination project
  • bigquery.jobs.create on the destination project
  • bigquery.datasets.get on the source and destination dataset
  • bigquery.tables.list on the source and destination dataset
  • bigquery.datasets.update on the destination dataset
  • bigquery.tables.create on the destination dataset

You might also be able to get these permissions with custom roles or other predefined roles.

Delete a dataset

Grant these roles to delete a dataset.

To get the permissions that you need to delete datasets, ask your administrator to grant you the BigQuery Data Owner (roles/bigquery.dataOwner) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to delete datasets. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to delete datasets:

  • bigquery.datasets.delete on the project
  • bigquery.tables.delete on the project

You might also be able to get these permissions with custom roles or other predefined roles.
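
For reference, a deletion performed with the bq command-line tool might look like the following sketch; the project and dataset names are placeholders, the -r flag removes the dataset's tables, and -f skips the confirmation prompt:

  bq rm -r -f --dataset myproject:mydataset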

Copy datasets

You can copy a dataset, including partitioned data within a region or across regions, without extracting, moving, or reloading data into BigQuery. BigQuery uses the BigQuery Data Transfer Service in the backend to copy datasets. For location considerations when you transfer data, see Data location and transfers.

For each dataset copy configuration, you can have one transfer run active at a time. Additional transfer runs are queued. If you are using the Google Cloud console, you can schedule recurring copies and configure email or Pub/Sub notifications with the BigQuery Data Transfer Service.
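
To see which dataset copy configurations already exist in your default project, you can list its transfer configurations with the bq command-line tool. This is a sketch; the transfer location (us) is a placeholder:

  bq ls --transfer_config --transfer_location=us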

Limitations

The following limitations apply when you copy datasets:

  • You can't copy the following resources from a source dataset:

    • Views.
    • Routines, including UDFs.
    • External tables.
    • Change data capture (CDC) tables if the copy job is across regions. Copying CDC tables within the same region is supported.
    • Cross-region copy jobs are not supported for tables encrypted with customer-managed encryption keys (CMEK) when the destination dataset is not encrypted with CMEK and no CMEK is provided. Copying tables with default encryption across regions is supported.

      You can copy all encrypted tables within the same region, including tables encrypted with CMEK.

  • You can't use the following resources as destination datasets for copy jobs:

    • Write-optimized storage.
    • Dataset encrypted with CMEK if the copy job is across regions and the source table is not encrypted with CMEK.

      However, tables encrypted with CMEK are allowed as destination tables when copying within the same region.

  • The minimum frequency between copy jobs is 12 hours.

  • Appending data to a partitioned or non-partitioned table in the destination dataset isn't supported. If there are no changes in the source table, the table is skipped. If the source table is updated, the destination table is completely truncated and replaced.

  • If a table exists in the source dataset and the destination dataset, and the source table has not changed since the last successful copy, it's skipped. The source table is skipped even if the Overwrite destination tables checkbox is selected.

  • When truncating tables in the destination dataset, the dataset copy job doesn't detect any changes made to resources in the destination dataset before it begins the copy job. The dataset copy job overwrites all of the data in the destination dataset, including both the tables and schema.

  • The destination table might not reflect changes made to the source tables after a copy job starts.

  • Copying a dataset is not supported in BigQuery Omni regions.

  • To copy a dataset to a project in another VPC Service Controls service perimeter, you need to set the following egress rules:

    • In the destination project's VPC Service Controls service perimeter configuration, the IAM principal must have the following methods:

      • bigquery.datasets.get
      • bigquery.tables.list
      • bigquery.tables.get
      • bigquery.tables.getData
    • In the source project's VPC Service Controls service perimeter configuration, the IAM principal being used must have the method set to All Methods.

  • If you try to update a dataset copy transfer configuration you don't own, the update might fail with the following error message:

    Cannot modify restricted parameters without taking ownership of the transfer configuration.

    The owner of the dataset copy is the user associated with the dataset copy or the user who has access to the service account associated with the dataset copy. The associated user can be seen in the configuration details of the dataset copy. For information on how to update the dataset copy to take ownership, see Update credentials; a command sketch also follows this list. To grant users access to a service account, you must have the Service Account User role.

    The owner-restricted parameters for dataset copies are:

    • Source project
    • Source dataset
    • Destination dataset
    • Overwrite destination table setting
  • All cross-region table copy limitations apply.
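
As a sketch, taking ownership of an existing dataset copy configuration by switching it to a service account that you have access to might look like the following bq command; the service account and the transfer configuration resource name are placeholders:

  # Switch the transfer configuration's credentials to a service account you control
  bq update \
      --transfer_config \
      --update_credentials=true \
      --service_account_name=copy-sa@myproject.iam.gserviceaccount.com \
      projects/myproject/locations/us/transferConfigs/1234abcd-0000-0000-0000-000000000000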

Copy a dataset

Select one of the following options:

Console

  1. Enable the BigQuery Data Transfer Service for your destination dataset (a command-line alternative is sketched after these steps).

    Enable the BigQuery Data Transfer Service API

  2. Ensure that you have the required roles.

    If you intend to set up transfer run notifications for Pub/Sub (Option 2 later in these steps), then you must have the pubsub.topics.setIamPolicy permission.

    If you only set up email notifications, then Pub/Sub permissions are not required. For more information, see BigQuery Data Transfer Service run notifications.

  3. Create a BigQuery dataset in the same region or a different region from your source dataset.
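
As noted in step 1, you can also enable the BigQuery Data Transfer Service API from the command line. The following gcloud sketch assumes a placeholder project ID:

  gcloud services enable bigquerydatatransfer.googleapis.com \
      --project=mydestproject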

Option 1: Use the BigQuery copy function

To create a one-time transfer, use the BigQuery copy function:

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer panel, expand your project and select a dataset.

  3. In the Dataset info section, click Copy, and then do the following:

    1. In the Dataset field, either create a new dataset or select an existing dataset ID from the list.

      Dataset names within a project must be unique. The destination dataset can be in a different region from the source dataset, but not all regions are supported for cross-region dataset copying.

      In the Location field, the location of the source dataset is displayed.

    2. Optional: To overwrite both the data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox. Both the source and destination tables must have the same partitioning schema.

    3. To copy the dataset, click Copy.
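
After the copy finishes, you can confirm that the tables were created in the destination dataset, for example by listing them with the bq command-line tool. This is a sketch; the project and dataset names are placeholders:

  bq ls mydestproject:mydataset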

Option 2: Use the BigQuery Data Transfer Service

To schedule recurring copies and configure email or Pub/Sub notifications, use the BigQuery Data Transfer Service in the Google Cloud console of the destination project:

  1. Go to the Data transfers page.

    Go to Data transfers

  2. Click Create a transfer.

  3. In the Source list, select Dataset Copy.

  4. In the Display name field, enter a name for your transfer run.

  5. In the Schedule options section, do the following:

    1. For Repeat frequency, choose an option for how often to run the transfer:

      If you select Custom, enter a custom frequency—for example, every day 00:00. For more information, see Formatting the schedule.

    2. For Start date and run time, enter the date and time to start the transfer. If you choose Start now, this option is disabled.

  6. In the Destination settings section, select a destination dataset to store your transfer data. You can also click CREATE NEW DATASET to create a new dataset before you select it for this transfer.

  7. In the Data source details section, enter the following information:

    1. For Source dataset, enter the dataset ID that you want to copy.
    2. For Source project, enter the project ID of your source dataset.
  8. To overwrite both the data and schema of the destination tables with the source tables, select the Overwrite destination tables checkbox. Both the source and destination tables must have the same partitioning schema.

  9. In the Service Account menu, select a service account from the service accounts associated with your Google Cloud project. You can associate a service account with your transfer instead of using your user credentials. For more information about using service accounts with data transfers, see Use service accounts.

    • If you signed in with a federated identity, then a service account is required to create a transfer. If you signed in with a Google Account, then a service account for the transfer is optional.
    • The service account must have the required roles.
  10. Optional: In the Notification options section, do the following:

    • To enable email notifications, click the toggle. When you enable this option, the owner of the transfer configuration receives an email notification when a transfer run fails.
    • To enable Pub/Sub notifications, click the toggle, and then either select a topic name from the list or click Create a topic. This option configures Pub/Sub run notifications for your transfer. You can also create the topic in advance, as sketched after these steps.
  11. Click Save.
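
If you want to select an existing topic in the Notification options step, you can create the topic ahead of time with the gcloud CLI. This is a sketch; the topic ID is a placeholder:

  gcloud pubsub topics create bq-dataset-copy-runs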

bq

  1. Enable the BigQuery Data Transfer Service for your destination dataset.

  2. Ensure that you have the required roles.

  3. To create a BigQuery dataset, use the bq mk command with the dataset creation flag --dataset and the location flag:

    bq mk \
      --dataset \
      --location=LOCATION \
      PROJECT:DATASET

    Replace the following:

    • LOCATION: the location where you want to copy the dataset
    • PROJECT: the project ID of your target dataset
    • DATASET: the name of the target dataset
  4. To copy a dataset, use the bq mk command with the transfer creation flag --transfer_config and the --data_source flag. You must set the --data_source flag to cross_region_copy. For a complete list of valid values for the --data_source flag, see the transfer-config flags in the bq command-line tool reference.

    bq mk \
      --transfer_config \
      --project_id=PROJECT \
      --data_source=cross_region_copy \
      --target_dataset=DATASET \
      --display_name=NAME \
      --service_account_name=SERVICE_ACCOUNT \
      --params='PARAMETERS'

    Replace the following:

    • NAME: the display name for the copy job or the transfer configuration

    • SERVICE_ACCOUNT: the service account name used to authenticate your transfer. The service account should be owned by the same project_id used to create the transfer and it should have all of the required permissions.

    • PARAMETERS: the parameters for the transfer configuration, in JSON format

      Parameters for a dataset copy configuration include the following:

      • source_dataset_id: the ID of the source dataset that you want to copy
      • source_project_id: the ID of the project that your source dataset is in
      • overwrite_destination_table: an optional flag that lets you truncate the tables of a previous copy and refresh all the data

      Both the source and destination tables must have the same partitioning schema.

    The following examples show the formatting of the parameters, based on your system's environment:

    • Linux: use single quotes to enclose the JSON string, for example:

      '{"source_dataset_id":"mydataset","source_project_id":"mysourceproject","overwrite_destination_table":"true"}'
      
    • Windows command line: use double quotes to enclose the JSON string, and escape double quotes in the string with a backslash, for example:

      "{\"source_dataset_id\":\"mydataset\",\"source_project_id\":\"mysourceproject\",\"overwrite_destination_table\":\"true\"}"
      
    • PowerShell: use single quotes to enclose the JSON string, and escape double quotes in the string with a backslash, for example:

      '{\"source_dataset_id\":\"mydataset\",\"source_project_id\":\"mysourceproject\",\"overwrite_destination_table\":\"true\"}'
      

    For example, the following command creates a dataset copy configuration that's named My Transfer with a target dataset that's named mydataset and a project with the ID of myproject.

    bq mk \
      --transfer_config \
      --project_id=myproject \
      --data_source=cross_region_copy \
      --target_dataset=mydataset \
      --display_name='My Transfer' \
      --params='{
          "source_dataset_id":"123_demo_eu",
          "source_project_id":"mysourceproject",
          "overwrite_destination_table":"true"
          }'
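
    After the configuration is created, you can check its run history with the bq command-line tool. The following is a sketch; the transfer configuration resource name, which is printed when the configuration is created, is a placeholder:

      bq ls --transfer_run \
          --run_attempt='LATEST' \
          projects/myproject/locations/us/transferConfigs/1234abcd-0000-0000-0000-000000000000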

API

  1. Enable the BigQuery Data Transfer Service for your destination dataset.

  2. Ensure that you have the required roles.

  3. To create a BigQuery dataset, call the datasets.insert method with a defined dataset resource.

  4. To copy a dataset, use the projects.locations.transferConfigs.create method and supply an instance of the TransferConfig resource.
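
    As a sketch of the underlying REST call (the project, location, dataset IDs, and parameter values are placeholders), creating a dataset copy configuration might look like the following:

      # Create a cross-region dataset copy configuration in the destination project
      curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{
              "destinationDatasetId": "mydataset",
              "displayName": "My Transfer",
              "dataSourceId": "cross_region_copy",
              "params": {
                "source_dataset_id": "mysourcedataset",
                "source_project_id": "mysourceproject",
                "overwrite_destination_table": "true"
              }
            }' \
        "https://bigquerydatatransfer.googleapis.com/v1/projects/mydestproject/locations/us/transferConfigs"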

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.bigquery.datatransfer.v1.CreateTransferConfigRequest;
import com.google.cloud.bigquery.datatransfer.v1.DataTransferServiceClient;
import com.google.cloud.bigquery.datatransfer.v1.ProjectName;
import com.google.cloud.bigquery.datatransfer.v1.TransferConfig;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sample to copy dataset from another gcp project
public class CopyDataset {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    final String destinationProjectId = "MY_DESTINATION_PROJECT_ID";
    final String destinationDatasetId = "MY_DESTINATION_DATASET_ID";
    final String sourceProjectId = "MY_SOURCE_PROJECT_ID";
    final String sourceDatasetId = "MY_SOURCE_DATASET_ID";
    Map<String, Value> params = new HashMap<>();
    params.put("source_project_id",