BigQuery - Package cloud.google.com/go/bigquery (v1.57.1)

Package bigquery provides a client for the BigQuery service.

The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs.

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a client:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    // TODO: Handle error.
}

Querying

To query existing tables, create a Query and call its Read method:

q := client.Query(`
    SELECT year, SUM(number) as num
    FROM ` + "`bigquery-public-data.usa_names.usa_1910_2013`" + `
    WHERE name = "William"
    GROUP BY year
    ORDER BY year
`)
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}

Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest:

for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(values)
}

You can also use a struct whose exported fields match the query:

type Count struct {
    Year int
    Num  int
}
for {
    var c Count
    err := it.Next(&c)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(c)
}

You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation.

job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process.

jobID := job.ID()
fmt.Printf("The job ID is %s\n", jobID)

To retrieve the job's results from the ID, first look up the Job:

job, err = client.JobFromID(ctx, jobID)
if err != nil {
    // TODO: Handle error.
}

Use the Job.Read method to obtain an iterator, and loop over the rows. Calling Query.Read is preferred for queries with a relatively small result set, as it calls the BigQuery jobs.query API for an optimized query path. If the query doesn't meet the criteria for that path, the method simply combines Query.Run and Job.Read.

it, err = job.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
// Proceed with iteration as above.

Datasets and Tables

You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method:

myDataset := client.Dataset("my_dataset")
yourDataset := client.DatasetInProject("your-project-id", "your_dataset")

These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference:

if err := myDataset.Create(ctx, nil); err != nil {
    // TODO: Handle error.
}

You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist.

table := myDataset.Table("my_table")

You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with:

err = myDataset.Table("temp").Create(ctx, &bigquery.TableMetadata{
    ExpirationTime: time.Now().Add(1*time.Hour)})
if err != nil {
    // TODO: Handle error.
}

We'll see how to create a table with a schema in the next section.

Schemas

There are two ways to construct schemas with this package. You can build a schema by hand, like so:

schema1 := bigquery.Schema{
    {Name: "Name", Required: true, Type: bigquery.StringFieldType},
    {Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
    {Name: "Optional", Required: false, Type: bigquery.IntegerFieldType},
}

Or you can infer the schema from a struct:

type student struct {
    Name     string
    Grades   []int
    Optional bigquery.NullInt64
}
schema2, err := bigquery.InferSchema(student{})
if err != nil {
    // TODO: Handle error.
}
// schema1 and schema2 are identical.

Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime, NullDateTime, and NullGeography) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields.

type student2 struct {
    Name     string `bigquery:"full_name"`
    Grades   []int
    Secret   string `bigquery:"-"`
    Optional []byte `bigquery:",nullable"`
}
schema3, err := bigquery.InferSchema(student2{})
if err != nil {
    // TODO: Handle error.
}
// schema3 has required fields "full_name" and "Grades", and nullable BYTES field "Optional".

Having constructed a schema, you can create a table with it like so:

if err := table.Create(ctx, &bigquery.TableMetadata{Schema: schema1}); err != nil {
    // TODO: Handle error.
}

Copying

You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job:

copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
copier.WriteDisposition = bigquery.WriteTruncate
job, err = copier.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can chain the call to Run if you don't want to set options:

job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can wait for your job to complete:

status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}

Job.Wait polls with exponential backoff. You can also poll yourself, if you wish:

for {
    status, err := job.Status(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    if status.Done() {
        if status.Err() != nil {
            log.Fatalf("Job failed with error %v", status.Err())
        }
        break
    }
    time.Sleep(pollInterval)
}

Loading and Uploading

There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program.

For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method.

gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.AllowJaggedRows = true
loader := myDataset.Table("dest").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever
job, err = loader.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// Poll the job for completion if desired, as above.

To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Inserter, and call its Put method with a slice of values.

u := table.Inserter()
// Item implements the ValueSaver interface.
items := []*Item{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items); err != nil {
    // TODO: Handle error.
}

You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred:

type Item2 struct {
    Name  string
    Size  float64
    Count int
}
// Item2 does not implement ValueSaver; the schema is inferred from the struct type.
items2 := []*Item2{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items2); err != nil {
    // TODO: Handle error.
}
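
As a minimal sketch, you can instead wrap each struct in a StructSaver to set the schema and insert IDs explicitly (the schema variable and the insert IDs below are illustrative assumptions):

savers := []*bigquery.StructSaver{
    {Struct: Item2{Name: "n4", Size: 12.5, Count: 3}, Schema: schema, InsertID: "id-n4"},
    {Struct: Item2{Name: "n5", Size: 7, Count: 5}, Schema: schema, InsertID: "id-n5"},
}
if err := u.Put(ctx, savers); err != nil {
    // TODO: Handle error.
}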

BigQuery allows for higher throughput when omitting insertion IDs. To enable this, specify the sentinel NoDedupeID value for the insertion ID when implementing a ValueSaver.
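
For example, a ValueSaver that opts out of deduplication might look like the following sketch (the Metric type is hypothetical):

type Metric struct {
    Name  string
    Value float64
}

// Save implements the ValueSaver interface. Returning NoDedupeID as the
// insert ID opts this row out of best-effort deduplication.
func (m *Metric) Save() (map[string]bigquery.Value, string, error) {
    return map[string]bigquery.Value{
        "name":  m.Name,
        "value": m.Value,
    }, bigquery.NoDedupeID, nil
}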

Extracting

If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method.

extractor := table.ExtractorTo(gcsRef)
extractor.DisableHeader = true
job, err = extractor.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// Poll the job for completion if desired, as above.

Errors

Errors returned by this client are often of the type googleapi.Error: https://godoc.org/google.golang.org/api/googleapi#Error

These errors can be introspected for more information by using xerrors.As with the richer *googleapi.Error type. For example:

var e *googleapi.Error
if ok := xerrors.As(err, &e); ok {
    if e.Code == 409 { ... }
}

In some cases, your client may receive unstructured googleapi.Error responses. In such cases, it is likely that you have exceeded BigQuery request limits, documented at: https://cloud.google.com/bigquery/quotas

Constants

LogicalStorageBillingModel, PhysicalStorageBillingModel

const (
	// LogicalStorageBillingModel indicates billing for logical bytes.
	LogicalStorageBillingModel = ""

	// PhysicalStorageBillingModel indicates billing for physical bytes.
	PhysicalStorageBillingModel = "PHYSICAL"
)

ScalarFunctionRoutine, ProcedureRoutine, TableValuedFunctionRoutine

const (
	// ScalarFunctionRoutine scalar function routine type
	ScalarFunctionRoutine = "SCALAR_FUNCTION"
	// ProcedureRoutine procedure routine type
	ProcedureRoutine = "PROCEDURE"
	// TableValuedFunctionRoutine routine type for table valued functions
	TableValuedFunctionRoutine = "TABLE_VALUED_FUNCTION"
)
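
These constants are used as the Type of a RoutineMetadata. As an illustrative sketch (the dataset and routine IDs are hypothetical), creating a scalar SQL function might look like:

routine := myDataset.Routine("multiply_by_three")
err := routine.Create(ctx, &bigquery.RoutineMetadata{
    Type:     bigquery.ScalarFunctionRoutine,
    Language: "SQL",
    Body:     "x * 3",
    Arguments: []*bigquery.RoutineArgument{
        {Name: "x", DataType: &bigquery.StandardSQLDataType{TypeKind: "INT64"}},
    },
})
if err != nil {
    // TODO: Handle error.
}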

NumericPrecisionDigits, NumericScaleDigits, BigNumericPrecisionDigits, BigNumericScaleDigits

const (
	// NumericPrecisionDigits is the maximum number of digits in a NUMERIC value.
	NumericPrecisionDigits = 38

	// NumericScaleDigits is the maximum number of digits after the decimal point in a NUMERIC value.
	NumericScaleDigits = 9

	// BigNumericPrecisionDigits is the maximum number of full digits in a BIGNUMERIC value.
	BigNumericPrecisionDigits = 76

	// BigNumericScaleDigits is the maximum number of digits after the decimal point in a BIGNUMERIC value.
	BigNumericScaleDigits = 38
)

DetectProjectID

const DetectProjectID = "*detect-project-id*"

DetectProjectID is a sentinel value that instructs NewClient to detect the project ID. It is given in place of the projectID argument. NewClient will use the project ID from the given credentials or the default credentials (https://developers.google.com/accounts/docs/application-default-credentials) if no credentials were provided. When providing credentials, not all options will allow NewClient to extract the project ID. Specifically a JWT does not have the project ID encoded.
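
For example:

client, err := bigquery.NewClient(ctx, bigquery.DetectProjectID)
if err != nil {
    // TODO: Handle error.
}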

NoDedupeID

const NoDedupeID = "NoDedupeID"

NoDedupeID indicates a streaming insert row wants to opt out of best-effort deduplication. It is EXPERIMENTAL and subject to change or removal without notice.

Scope

const (
	// Scope is the Oauth2 scope for the service.
	// For relevant BigQuery scopes, see:
	// https://developers.google.com/identity/protocols/googlescopes#bigqueryv2
	Scope = "https://www.googleapis.com/auth/bigquery"
)
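
As a sketch, you can override the client's default OAuth2 scopes with this constant via option.WithScopes from google.golang.org/api/option:

client, err := bigquery.NewClient(ctx, projectID,
    option.WithScopes(bigquery.Scope))
if err != nil {
    // TODO: Handle error.
}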

Variables

NeverExpire

var NeverExpire = time.Time{}.Add(-1)

NeverExpire is a sentinel value used to remove a table's expiration time.
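
As a minimal sketch, pass NeverExpire in a metadata update to clear a table's expiration time (the empty etag skips the etag precondition):

if _, err := table.Update(ctx, bigquery.TableMetadataToUpdate{
    ExpirationTime: bigquery.NeverExpire,
}, ""); err != nil {
    // TODO: Handle error.
}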

Functions

func BigNumericString

func BigNumericString(r *big.Rat) string

BigNumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating point literal with 38 digits after the decimal point.
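
For example:

r := big.NewRat(1, 3)
// s is a decimal literal with 38 digits after the decimal point.
s := bigquery.BigNumericString(r)
fmt.Println(s)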

func CivilDateTimeString

func CivilDateTimeString(dt civil.DateTime) string

CivilDateTimeString returns a string representing a civil.DateTime in a format compatible with BigQuery SQL. It separates the date and time with a space, and formats the time with CivilTimeString.

Use CivilDateTimeString when using civil.DateTime in DML, for example in INSERT statements.
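
As an illustrative sketch (the my_dataset.events table and happened_at column are hypothetical):

dt := civil.DateTime{
    Date: civil.Date{Year: 2024, Month: time.March, Day: 1},
    Time: civil.Time{Hour: 12, Minute: 30},
}
q := client.Query(fmt.Sprintf(
    "INSERT `my_dataset.events` (happened_at) VALUES (DATETIME '%s')",
    bigquery.CivilDateTimeString(dt)))
if _, err := q.Read(ctx); err != nil {
    // TODO: Handle error.
}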

func CivilTimeString

func CivilTimeString(t civil.Time) string

CivilTimeString returns a string representing a civil.Time in a format compatible with BigQuery SQL. It rounds the time to the nearest microsecond and returns a string with six digits of sub-second precision.

Use CivilTimeString when using civil.Time in DML, for example in INSERT statements.

func IntervalString

func IntervalString(iv *IntervalValue) string

IntervalString returns a string representing an *IntervalValue in a format compatible with BigQuery SQL. It returns an interval literal in canonical format.

func NewArrowIteratorReader

func NewArrowIteratorReader(it ArrowIterator) io.Reader

NewArrowIteratorReader allows an ArrowIterator to be consumed as an io.Reader. Experimental: this interface is experimental and may be modified or removed in future versions, regardless of any other documented package stability guarantees.

func NumericString

func NumericString(r *big.Rat) string

NumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating-point literal with 9 digits after the decimal point.

func Seed

func Seed(s int64)

Seed seeds this package's random number generator, used for generating job and insert IDs. Use Seed to obtain repeatable, deterministic behavior from bigquery clients. Seed should be called before any clients are created.

AccessEntry

type AccessEntry struct {
	Role       AccessRole          // The role of the entity
	EntityType EntityType          // The type of entity
	Entity     string              // The entity (individual or group) granted access
	View       *Table              // The view granted access (EntityType must be ViewEntity)
	Routine    *Routine            // The routine granted access (only UDF currently supported)
	Dataset    *DatasetAccessEntry // The resources within a dataset granted access.
}

An AccessEntry describes the permissions that an entity has on a dataset.
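
As a sketch (the email address is a placeholder), you can grant access by appending an AccessEntry to a dataset's access list and updating its metadata:

meta, err := myDataset.Metadata(ctx)
if err != nil {
    // TODO: Handle error.
}
update := bigquery.DatasetMetadataToUpdate{
    Access: append(meta.Access, &bigquery.AccessEntry{
        Role:       bigquery.ReaderRole,
        EntityType: bigquery.UserEmailEntity,
        Entity:     "user@example.com",
    }),
}
if _, err := myDataset.Update(ctx, update, meta.ETag); err != nil {
    // TODO: Handle error.
}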

AccessRole

type AccessRole string

AccessRole is the level of access to grant to a dataset.

OwnerRole, ReaderRole, WriterRole

const (
	// OwnerRole is the OWNER AccessRole.
	OwnerRole AccessRole = "OWNER"
	// ReaderRole is the READER AccessRole.
	ReaderRole AccessRole = "READER"
	// WriterRole is the WRITER AccessRole.
	WriterRole AccessRole = "WRITER"
)

ArrowIterator

type ArrowIterator interface {
	Next() (*ArrowRecordBatch, error)
	Schema() Schema
	SerializedArrowSchema() []byte
}

ArrowIterator represents a way to iterate through a stream of arrow records. Experimental: this interface is experimental and may be modified or removed in future versions, regardless of any other documented package stability guarantees.

ArrowRecordBatch

type ArrowRecordBatch struct {

	// Serialized Arrow Record Batch.
	Data []byte
	// Serialized Arrow Schema.
	Schema []byte
	// Source partition ID. In the Storage API world, it represents the ReadStream.
	PartitionID string
	// contains filtered or unexported fields
}

ArrowRecordBatch represents an Arrow RecordBatch with the source PartitionID.

func (*ArrowRecordBatch) Read

func (r *ArrowRecordBatch) Read(p []byte) (int, error)

Read makes ArrowRecordBatch implement io.Reader.

AvroOptions

type AvroOptions struct {
	// UseAvroLogicalTypes indicates whether to interpret logical types as the
	// corresponding BigQuery data type (for example, TIMESTAMP), instead of using
	// the raw type (for example, INTEGER).
	UseAvroLogicalTypes bool
}

AvroOptions are additional options for Avro external data sources.

BIEngineReason

type BIEngineReason struct {
	// High-Level BI engine reason for partial or disabled acceleration.
	Code string

	// Human-readable reason for partial or disabled acceleration.
	Message string
}

BIEngineReason contains more detailed information about why a query wasn't fully accelerated.

BIEngineStatistics

type BIEngineStatistics struct {
	// Specifies which mode of BI Engine acceleration was performed.
	BIEngineMode string

	// In case of DISABLED or PARTIAL BIEngineMode, these
	// contain the explanatory reasons as to why BI Engine could not
	// accelerate. In case the full query was accelerated, this field is not
	// populated.
	BIEngineReasons []*BIEngineReason
}

BIEngineStatistics contains query statistics specific to the use of BI Engine.

BigtableColumn

type BigtableColumn struct {
	// Qualifier of the column. Columns in the parent column family that have this
	// exact qualifier are exposed as a <family field name>.<column field name>
	// field. The column field name is the same as the column qualifier.
	Qualifier string

	// If the qualifier is not a valid BigQuery field identifier i.e. does not match
	// [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field
	// name and is used as field name in queries.
	FieldName string

	// If true, only the latest version of values are exposed for this column.
	// See BigtableColumnFamily.OnlyReadLatest.
	OnlyReadLatest bool

	// The encoding of the values when the type is not STRING.
	// See BigtableColumnFamily.Encoding
	Encoding string

	// The type to convert the value in cells of this column.
	// See BigtableColumnFamily.Type
	Type string
}