BigQuery - Package cloud.google.com/go/bigquery (v1.67.0)

Package bigquery provides a client for the BigQuery service.

The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs.

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a client with NewClient:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    // TODO: Handle error.
}

Querying

To query existing tables, create a Query with Client.Query and call its Query.Read method, which starts the query and waits for it to complete:

q := client.Query(`
    SELECT year, SUM(number) as num
    FROM bigquery-public-data.usa_names.usa_1910_2013
    WHERE name = @name
    GROUP BY year
    ORDER BY year
`)
q.Parameters = []bigquery.QueryParameter{
    {Name: "name", Value: "William"},
}
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}

Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of Value. A slice is simplest:

for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(values)
}

You can also use a struct whose exported fields match the query:

type Count struct {
    Year int
    Num  int
}
for {
    var c Count
    err := it.Next(&c)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(c)
}

You can also start the query running and get the results later. Create the query as above, but call Query.Run instead of Query.Read. This returns a Job, which represents an asynchronous operation.

job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process.

jobID := job.ID()
fmt.Printf("The job ID is %s\n", jobID)

To retrieve the job's results from the ID, first look up the Job with the Client.JobFromID method:

job, err = client.JobFromID(ctx, jobID)
if err != nil {
    // TODO: Handle error.
}

Use the Job.Read method to obtain an iterator, and loop over the rows. Calling Query.Read is preferred for queries with a relatively small result set, as it will call the BigQuery jobs.query API for an optimized query path. If the query doesn't meet those criteria, the method will simply combine Query.Run and Job.Read.

it, err = job.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
// Proceed with iteration as above.

Datasets and Tables

You can refer to datasets in the client's project with the Client.Dataset method, and in other projects with the Client.DatasetInProject method:

myDataset := client.Dataset("my_dataset")
yourDataset := client.DatasetInProject("your-project-id", "your_dataset")

These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference:

if err := myDataset.Create(ctx, nil); err != nil {
    // TODO: Handle error.
}

You can refer to tables with Dataset.Table. Like Dataset, Table is a reference to an object in BigQuery that may or may not exist.

table := myDataset.Table("my_table")

You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with:

err = myDataset.Table("temp").Create(ctx, &bigquery.TableMetadata{
    ExpirationTime: time.Now().Add(1*time.Hour)})
if err != nil {
    // TODO: Handle error.
}

We'll see how to create a table with a schema in the next section.

Schemas

There are two ways to construct schemas with this package. You can build a schema by hand with the Schema struct, like so:

schema1 := bigquery.Schema{
    {Name: "Name", Required: true, Type: bigquery.StringFieldType},
    {Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
    {Name: "Optional", Required: false, Type: bigquery.IntegerFieldType},
}

Or you can infer the schema from a struct with the InferSchema method:

type student struct {
    Name   string
    Grades []int
    Optional bigquery.NullInt64
}
schema2, err := bigquery.InferSchema(student{})
if err != nil {
    // TODO: Handle error.
}
// schema1 and schema2 are identical.

Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime, NullDateTime, NullGeography, and NullJSON) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields.

type student2 struct {
    Name     string `bigquery:"full_name"`
    Grades   []int
    Secret   string `bigquery:"-"`
    Optional []byte `bigquery:",nullable"`
}
schema3, err := bigquery.InferSchema(student2{})
if err != nil {
    // TODO: Handle error.
}
// schema3 has required field "full_name", repeated field "Grades", and nullable BYTES field "Optional".

Having constructed a schema, you can create a table with it using the Table.Create method like so:

if err := table.Create(ctx, &bigquery.TableMetadata{Schema: schema1}); err != nil {
    // TODO: Handle error.
}

Copying

You can copy one or more tables to another table. Begin by constructing a Copier describing the copy with Table.CopierFrom. Then set any desired copy options, and finally call Copier.Run to get a Job:

copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
copier.WriteDisposition = bigquery.WriteTruncate
job, err = copier.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can chain the call to Copier.Run if you don't want to set options:

job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can wait for your job to complete with the Job.Wait method:

status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}

Job.Wait polls with exponential backoff. You can also poll yourself, if you wish:

for {
    status, err := job.Status(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    if status.Done() {
        if status.Err() != nil {
            log.Fatalf("Job failed with error %v", status.Err())
        }
        break
    }
    time.Sleep(pollInterval)
}

Loading and Uploading

There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program.

For loading, first create a GCSReference with the NewGCSReference method, configuring it if desired. Then make a Loader from a table with the Table.LoaderFrom method, passing the reference; optionally configure the loader as well, and call its Loader.Run method.

gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.AllowJaggedRows = true
loader := myDataset.Table("dest").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever
job, err = loader.Run(ctx)
// Poll the job for completion if desired, as above.

To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Inserter, and call its Inserter.Put method with a slice of values.

type Item struct {
    Name  string
    Size  float64
    Count int
}

// Save implements the ValueSaver interface.
func (i *Item) Save() (map[string]bigquery.Value, string, error) {
    return map[string]bigquery.Value{
        "Name":  i.Name,
        "Size":  i.Size,
        "Count": i.Count,
    }, "", nil
}

u := table.Inserter()
// Item implements the ValueSaver interface.
items := []*Item{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items); err != nil {
    // TODO: Handle error.
}

You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type to specify the schema and insert ID by hand:

type score struct {
    Name string
    Num  int
}

// Assume schema holds the table's schema.
savers := []*bigquery.StructSaver{
    {Struct: score{Name: "n1", Num: 12}, Schema: schema, InsertID: "id1"},
    {Struct: score{Name: "n2", Num: 31}, Schema: schema, InsertID: "id2"},
    {Struct: score{Name: "n3", Num: 7}, Schema: schema, InsertID: "id3"},
}

if err := u.Put(ctx, savers); err != nil {
    // TODO: Handle error.
}

Last but not least, you can just supply the struct or struct pointer directly and the schema will be inferred:

type Item2 struct {
    Name  string
    Size  float64
    Count int
}

// Item2 doesn't implement ValueSaver interface, so schema will be inferred.
items2 := []*Item2{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items2); err != nil {
    // TODO: Handle error.
}

BigQuery allows for higher throughput when omitting insertion IDs. To enable this, specify the sentinel NoDedupeID value for the insertion ID when implementing a ValueSaver.
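
For illustration, a minimal sketch of such a ValueSaver; FastItem is a hypothetical type, not part of this package:

type FastItem struct {
    Name  string
    Count int
}

// Save implements ValueSaver and returns NoDedupeID as the insert ID,
// opting this row out of best-effort deduplication.
func (i *FastItem) Save() (map[string]bigquery.Value, string, error) {
    return map[string]bigquery.Value{
        "Name":  i.Name,
        "Count": i.Count,
    }, bigquery.NoDedupeID, nil
}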

Extracting

If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Extractor.Run method.

extractor := table.ExtractorTo(gcsRef)
extractor.DisableHeader = true
job, err = extractor.Run(ctx)
// Poll the job for completion if desired, as above.

Errors

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As to access the richer *googleapi.Error type. For example:

var e *googleapi.Error
if ok := errors.As(err, &e); ok {
    if e.Code == 409 { ... }
}

In some cases, your client may receive unstructured googleapi.Error responses. In such cases, it is likely that you have exceeded BigQuery request limits, documented at: https://cloud.google.com/bigquery/quotas

Constants

LogicalStorageBillingModel, PhysicalStorageBillingModel

const (
	// LogicalStorageBillingModel indicates billing for logical bytes.
	LogicalStorageBillingModel = ""

	// PhysicalStorageBillingModel indicates billing for physical bytes.
	PhysicalStorageBillingModel = "PHYSICAL"
)

ScalarFunctionRoutine, ProcedureRoutine, TableValuedFunctionRoutine

const (
	// ScalarFunctionRoutine scalar function routine type
	ScalarFunctionRoutine = "SCALAR_FUNCTION"
	// ProcedureRoutine procedure routine type
	ProcedureRoutine = "PROCEDURE"
	// TableValuedFunctionRoutine routine type for table valued functions
	TableValuedFunctionRoutine = "TABLE_VALUED_FUNCTION"
)

NumericPrecisionDigits, NumericScaleDigits, BigNumericPrecisionDigits, BigNumericScaleDigits

const (
	// NumericPrecisionDigits is the maximum number of digits in a NUMERIC value.
	NumericPrecisionDigits = 38

	// NumericScaleDigits is the maximum number of digits after the decimal point in a NUMERIC value.
	NumericScaleDigits = 9

	// BigNumericPrecisionDigits is the maximum number of full digits in a BIGNUMERIC value.
	BigNumericPrecisionDigits = 76

	// BigNumericScaleDigits is the maximum number of digits after the decimal point in a BIGNUMERIC value.
	BigNumericScaleDigits = 38
)

DetectProjectID

const DetectProjectID = "*detect-project-id*"

DetectProjectID is a sentinel value that instructs [NewClient] to detect the project ID. It is given in place of the projectID argument. [NewClient] will use the project ID from the given credentials or the default credentials (https://developers.google.com/accounts/docs/application-default-credentials) if no credentials were provided. When providing credentials, not all options will allow [NewClient] to extract the project ID. Specifically a JWT does not have the project ID encoded.
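
A minimal sketch, assuming application default credentials that carry a project ID:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, bigquery.DetectProjectID)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(client.Project()) // The project ID detected from the credentials.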

NoDedupeID

const NoDedupeID = "NoDedupeID"

NoDedupeID indicates a streaming insert row wants to opt out of best-effort deduplication. It is EXPERIMENTAL and subject to change or removal without notice.

Scope

const (
	// Scope is the Oauth2 scope for the service.
	// For relevant BigQuery scopes, see:
	// https://developers.google.com/identity/protocols/googlescopes#bigqueryv2
	Scope = "https://www.googleapis.com/auth/bigquery"
)

Variables

NeverExpire

var NeverExpire = time.Time{}.Add(-1)

NeverExpire is a sentinel value used to remove a table's expiration time.

Functions

func BigNumericString

func BigNumericString(r *big.Rat) string

BigNumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating point literal with 38 digits after the decimal point.
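
A small sketch of formatting a *big.Rat value (NumericString works the same way, with 9 digits of scale):

r := big.NewRat(1, 3)
lit := bigquery.BigNumericString(r)
// lit is a floating-point literal with 38 digits after the decimal point,
// suitable for embedding in SQL text, e.g. as BIGNUMERIC '<lit>'.
fmt.Println(lit)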

func CivilDateTimeString

func CivilDateTimeString(dt civil.DateTime) string

CivilDateTimeString returns a string representing a civil.DateTime in a format compatible with BigQuery SQL. It separates the date and time with a space, and formats the time with CivilTimeString.

Use CivilDateTimeString when using civil.DateTime in DML, for example in INSERT statements.
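
As a sketch, a civil.DateTime (from cloud.google.com/go/civil) can be embedded in an INSERT statement; this assumes ctx and a client as in the examples above, and my_dataset.events with its happened column are hypothetical:

dt := civil.DateTime{
    Date: civil.Date{Year: 2024, Month: time.May, Day: 6},
    Time: civil.Time{Hour: 12, Minute: 30},
}
q := client.Query(fmt.Sprintf(
    "INSERT `my_dataset.events` (happened) VALUES (DATETIME '%s')",
    bigquery.CivilDateTimeString(dt)))
_ = q // TODO: Call Query.Run or Query.Read.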

func CivilTimeString

func CivilTimeString(t civil.Time) string

CivilTimeString returns a string representing a civil.Time in a format compatible with BigQuery SQL. It rounds the time to the nearest microsecond and returns a string with six digits of sub-second precision.

Use CivilTimeString when using civil.Time in DML, for example in INSERT statements.

func IntervalString

func IntervalString(iv *IntervalValue) string

IntervalString returns a string representing an *IntervalValue in a format compatible with BigQuery SQL. It returns an interval literal in canonical format.

func NewArrowIteratorReader

func NewArrowIteratorReader(it ArrowIterator) io.Reader

NewArrowIteratorReader allows an ArrowIterator to be consumed as an io.Reader. Experimental: this interface is experimental and may be modified or removed in future versions, regardless of any other documented package stability guarantees.

func NumericString

func NumericString(r *big.Rat) string

NumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating-point literal with 9 digits after the decimal point.

func Seed

func Seed(s int64)

Seed seeds this package's random number generator, used for generating job and insert IDs. Use Seed to obtain repeatable, deterministic behavior from bigquery clients. Seed should be called before any clients are created.
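
A minimal sketch:

bigquery.Seed(42) // Call once, before creating any clients, e.g. in tests.

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
_ = client // Job and insert IDs generated via this client are now deterministic.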

AccessEntry

type AccessEntry struct {
	Role       AccessRole          // The role of the entity
	EntityType EntityType          // The type of entity
	Entity     string              // The entity (individual or group) granted access
	View       *Table              // The view granted access (EntityType must be ViewEntity)
	Routine    *Routine            // The routine granted access (only UDF currently supported)
	Dataset    *DatasetAccessEntry // The resources within a dataset granted access.
	Condition  *Expr               // Condition for the access binding.
}

An AccessEntry describes the permissions that an entity has on a dataset.
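
As a sketch, one way to grant a group read access is to append an entry to the dataset's access list and write it back; this assumes ctx and a client as in the earlier examples, and the group address is illustrative:

ds := client.Dataset("my_dataset")
md, err := ds.Metadata(ctx)
if err != nil {
    // TODO: Handle error.
}
access := append(md.Access, &bigquery.AccessEntry{
    Role:       bigquery.ReaderRole,
    EntityType: bigquery.GroupEmailEntity,
    Entity:     "analysts@example.com",
})
// The entire access list is replaced; individual entries cannot be patched.
if _, err := ds.Update(ctx, bigquery.DatasetMetadataToUpdate{Access: access}, md.ETag); err != nil {
    // TODO: Handle error.
}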

AccessRole

type AccessRole string

AccessRole is the level of access to grant to a dataset.

OwnerRole, ReaderRole, WriterRole

const (
	// OwnerRole is the OWNER AccessRole.
	OwnerRole AccessRole = "OWNER"
	// ReaderRole is the READER AccessRole.
	ReaderRole AccessRole = "READER"
	// WriterRole is the WRITER AccessRole.
	WriterRole AccessRole = "WRITER"
)

ArrowIterator

type ArrowIterator interface {
	Next() (*ArrowRecordBatch, error)
	Schema() Schema
	SerializedArrowSchema() []byte
}

ArrowIterator represents a way to iterate through a stream of arrow records. Experimental: this interface is experimental and may be modified or removed in future versions, regardless of any other documented package stability guarantees.

ArrowRecordBatch

type ArrowRecordBatch struct {

	// Serialized Arrow Record Batch.
	Data []byte
	// Serialized Arrow Schema.
	Schema []byte
	// Source partition ID. In the Storage API world, it represents the ReadStream.
	PartitionID string
	// contains filtered or unexported fields
}

ArrowRecordBatch represents an Arrow RecordBatch with the source PartitionID

func (*ArrowRecordBatch) Read

func (r *ArrowRecordBatch) Read(p []byte) (int, error)

Read makes ArrowRecordBatch implement io.Reader.

AvroOptions

type AvroOptions struct {
	// UseAvroLogicalTypes indicates whether to interpret logical types as the
	// corresponding BigQuery data type (for example, TIMESTAMP), instead of using
	// the raw type (for example, INTEGER).
	UseAvroLogicalTypes bool
}

AvroOptions are additional options for Avro external data sources.

BIEngineReason

type BIEngineReason struct {
	// High-Level BI engine reason for partial or disabled acceleration.
	Code string

	// Human-readable reason for partial or disabled acceleration.
	Message string
}

BIEngineReason contains more detailed information about why a query wasn't fully accelerated.

BIEngineStatistics

type BIEngineStatistics struct {
	// Specifies which mode of BI Engine acceleration was performed.
	BIEngineMode string

	// In case of DISABLED or PARTIAL BIEngineMode, these
	// contain the explanatory reasons as to why BI Engine could not
	// accelerate. In case the full query was accelerated, this field is not
	// populated.
	BIEngineReasons []*BIEngineReason
}

BIEngineStatistics contains query statistics specific to the use of BI Engine.

BigtableColumn

type BigtableColumn struct {
	// Qualifier of the column. Columns in the parent column family that have this
	// exact qualifier are exposed as a <column family>.<column> field. The column
	// field name is the same as the column qualifier.
	Qualifier string

	// If the qualifier is not a valid BigQuery field identifier i.e. does not match
	// [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field
	// name and is used as field name in queries.
	FieldName string

	// If true, only the latest version of values are exposed for this column.
	// See BigtableColumnFamily.OnlyReadLatest.
	OnlyReadLatest bool

	// The encoding of the values when the type is not STRING.
	// See BigtableColumnFamily.Encoding
	Encoding string

	// The type to convert the value in cells of this column.
	// See BigtableColumnFamily.Type
	Type string
}

BigtableColumn describes how BigQuery should access a Bigtable column.

BigtableColumnFamily

type BigtableColumnFamily struct {
	// Identifier of the column family.
	FamilyID string

	// Lists of columns that should be exposed as individual fields as opposed to a
	// list of (column name, value) pairs. All columns whose qualifier matches a
	// qualifier in this list can be accessed as <family>.<column>. Other columns can
	// be accessed as a list through the <family>.Column field.
	Columns []*BigtableColumn

	// The encoding of the values when the type is not STRING. Acceptable encoding values are:
	// - TEXT - indicates values are alphanumeric text strings.
	// - BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions.
	// This can be overridden for a specific column by listing that column in 'columns' and
	// specifying an encoding for it.
	Encoding string

	// If true, only the latest version of values are exposed for all columns in this
	// column family. This can be overridden for a specific column by listing that
	// column in 'columns' and specifying a different setting for that column.
	OnlyReadLatest bool

	// The type to convert the value in cells of this
	// column family. The values are expected to be encoded using HBase
	// Bytes.toBytes function when using the BINARY encoding value.
	// Following BigQuery types are allowed (case-sensitive):
	// BYTES STRING INTEGER FLOAT BOOLEAN.
	// The default type is BYTES. This can be overridden for a specific column by
	// listing that column in 'columns' and specifying a type for it.
	Type string
}

BigtableColumnFamily describes how BigQuery should access a Bigtable column family.

BigtableOptions

type BigtableOptions struct {
	// A list of column families to expose in the table schema along with their
	// types. If omitted, all column families are present in the table schema and
	// their values are read as BYTES.
	ColumnFamilies []*BigtableColumnFamily

	// If true, then the column families that are not specified in columnFamilies
	// list are not exposed in the table schema. Otherwise, they are read with BYTES
	// type values. The default is false.
	IgnoreUnspecifiedColumnFamilies bool

	// If true, then the rowkey column families will be read and converted to string.
	// Otherwise they are read with BYTES type values and users need to manually cast
	// them with CAST if necessary. The default is false.
	ReadRowkeyAsString bool
}

BigtableOptions are additional options for Bigtable external data sources.

CSVOptions

type CSVOptions struct {
	// AllowJaggedRows causes missing trailing optional columns to be tolerated
	// when reading CSV data. Missing values are treated as nulls.
	AllowJaggedRows bool

	// AllowQuotedNewlines sets whether quoted data sections containing
	// newlines are allowed when reading CSV data.
	AllowQuotedNewlines bool

	// Encoding is the character encoding of data to be read.
	Encoding Encoding

	// FieldDelimiter is the separator for fields in a CSV file, used when
	// reading or exporting data. The default is ",".
	FieldDelimiter string

	// Quote is the value used to quote data sections in a CSV file. The
	// default quotation character is the double quote ("), which is used if
	// both Quote and ForceZeroQuote are unset.
	// To specify that no character should be interpreted as a quotation
	// character, set ForceZeroQuote to true.
	// Only used when reading data.
	Quote          string
	ForceZeroQuote bool

	// The number of rows at the top of a CSV file that BigQuery will skip when
	// reading data.
	SkipLeadingRows int64

	// An optional custom string that will represent a NULL
	// value in CSV import data.
	NullMarker string

	// Preserves the embedded ASCII control characters (the first 32 characters in the ASCII-table,
	// from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV, ignored for other formats.
	PreserveASCIIControlCharacters bool
}

CSVOptions are additional options for CSV external data sources.
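
A sketch of using CSVOptions when defining an external data source; the bucket path and settings are illustrative:

edc := &bigquery.ExternalDataConfig{
    SourceFormat: bigquery.CSV,
    SourceURIs:   []string{"gs://my-bucket/data/*.csv"},
    AutoDetect:   true,
    Options: &bigquery.CSVOptions{
        SkipLeadingRows:     1,
        AllowQuotedNewlines: true,
    },
}
// For example, attach it to a table definition before calling Table.Create.
md := &bigquery.TableMetadata{ExternalDataConfig: edc}
_ = md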

Client

type Client struct {
	// Location, if set, will be used as the default location for all subsequent
	// dataset creation and job operations. A location specified directly in one of
	// those operations will override this value.
	Location string
	// contains filtered or unexported fields
}

Client may be used to perform BigQuery operations.

func NewClient

func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error)

NewClient constructs a new [Client] which can perform BigQuery operations. Operations performed via the client are billed to the specified GCP project.

If the project ID is set to [DetectProjectID], NewClient will attempt to detect the project ID from credentials.

This client supports enabling query-related preview features via environment variables. By setting the environment variable QUERY_PREVIEW_ENABLED to the string "TRUE", the client will enable preview features, though behavior may still be controlled via the BigQuery service as well. Currently, the feature(s) in scope include: short mode queries (query execution without corresponding job metadata).

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	_ = client // TODO: Use client.
}

func (*Client) Close

func (c *Client) Close() error

Close closes any resources held by the client. Close should be called when the client is no longer needed. It need not be called at program exit.
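
A common pattern is to defer Close immediately after a successful NewClient; a minimal sketch:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
defer client.Close()
// TODO: Use client.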

func (*Client) Dataset

func (c *Client) Dataset(id string) *Dataset

Dataset creates a handle to a BigQuery dataset in the client's project.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.Dataset("my_dataset")
	fmt.Println(ds)
}

func (*Client) DatasetInProject

func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset

DatasetInProject creates a handle to a BigQuery dataset in the specified project.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.DatasetInProject("their-project-id", "their-dataset")
	fmt.Println(ds)
}

func (*Client) Datasets

func (c *Client) Datasets(ctx context.Context) *DatasetIterator

Datasets returns an iterator over the datasets in a project. The Client's project is used by default, but that can be changed by setting ProjectID on the returned iterator before calling Next.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	it := client.Datasets(ctx)
	_ = it // TODO: iterate using Next or iterator.Pager.
}

func (*Client) DatasetsInProject (deprecated)

func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator

DatasetsInProject returns an iterator over the datasets in the provided project.

Deprecated: call Client.Datasets, then set ProjectID on the returned iterator.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	it := client.DatasetsInProject(ctx, "their-project-id")
	_ = it // TODO: iterate using Next or iterator.Pager.
}

func (*Client) EnableStorageReadClient

func (c *Client) EnableStorageReadClient(ctx context.Context, opts ...option.ClientOption) error

EnableStorageReadClient sets up a Storage API connection to be used when fetching large datasets from tables, jobs or queries. Currently, pagination methods like PageInfo().Token and RowIterator.StartIndex are not supported when the Storage API is enabled. Calling this method twice will return an error.
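
A minimal sketch of opting in to the Storage API for reads:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
// Acceleration applies to subsequent large reads from tables, jobs and queries.
if err := client.EnableStorageReadClient(ctx); err != nil {
    // TODO: Handle error.
}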

func (*Client) JobFromID

func (c *Client) JobFromID(ctx context.Context, id string) (*Job, error)

JobFromID creates a Job which refers to an existing BigQuery job. The job need not have been created by this package. For example, the job may have been created in the BigQuery console.

For jobs whose location is other than "US" or "EU", set Client.Location or use JobFromIDLocation.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func getJobID() string { return "" }

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	jobID := getJobID() // Get a job ID using Job.ID, the console or elsewhere.
	job, err := client.JobFromID(ctx, jobID)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(job.LastStatus()) // Display the job's status.
}

func (*Client) JobFromIDLocation

func (c *Client) JobFromIDLocation(ctx context.Context, id, location string) (j *Job, err error)

JobFromIDLocation creates a Job which refers to an existing BigQuery job. The job need not have been created by this package (for example, it may have been created in the BigQuery console), but it must exist in the specified location.

func (*Client) JobFromProject

func (c *Client) JobFromProject(ctx context.Context, projectID, jobID, location string) (j *Job, err error)

JobFromProject creates a Job which refers to an existing BigQuery job. The job need not have been created by this package, nor does it need to reside within the same project or location as the instantiated client.

func (*Client) Jobs

func (c *Client) Jobs(ctx context.Context) *JobIterator

Jobs lists jobs within a project.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	it := client.Jobs(ctx)
	it.State = bigquery.Running // list only running jobs.
	_ = it                      // TODO: iterate using Next or iterator.Pager.
}

func (*Client) Project

func (c *Client) Project() string

Project returns the project ID or number for this instance of the client, which may have either been explicitly specified or autodetected.

func (*Client) Query

func (c *Client) Query(q string) *Query

Query creates a query with string q. The returned Query may optionally be further configured before its Run method is called.

Examples

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("select name, num from t1")
	q.DefaultProjectID = "project-id"
	// TODO: set other options on the Query.
	// TODO: Call Query.Run or Query.Read.
}

encryptionKey
package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("select name, num from t1")
	// TODO: Replace this key with a key you have created in Cloud KMS.
	keyName := "projects/P/locations/L/keyRings/R/cryptoKeys/K"
	q.DestinationEncryptionConfig = &bigquery.EncryptionConfig{KMSKeyName: keyName}
	// TODO: set other options on the Query.
	// TODO: Call Query.Run or Query.Read.
}

parameters
package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("select num from t1 where name = @user")
	q.Parameters = []bigquery.QueryParameter{
		{Name: "user", Value: "Elizabeth"},
	}
	// TODO: set other options on the Query.
	// TODO: Call Query.Run or Query.Read.
}

CloneDefinition

type CloneDefinition struct {

	// BaseTableReference describes the ID of the table that this clone
	// came from.
	BaseTableReference *Table

	// CloneTime indicates when the base table was cloned.
	CloneTime time.Time
}

CloneDefinition provides metadata related to the origin of a clone.

Clustering

type Clustering struct {
	Fields []string
}

Clustering governs the organization of data within a managed table. For more information, see https://cloud.google.com/bigquery/docs/clustered-tables

ColumnNameCharacterMap

type ColumnNameCharacterMap string

ColumnNameCharacterMap is used to specify column naming behavior for load jobs.

UnspecifiedColumnNameCharacterMap, StrictColumnNameCharacterMap, V1ColumnNameCharacterMap, V2ColumnNameCharacterMap

var (

	// UnspecifiedColumnNameCharacterMap is the unspecified default value.
	UnspecifiedColumnNameCharacterMap ColumnNameCharacterMap = "COLUMN_NAME_CHARACTER_MAP_UNSPECIFIED"

	// StrictColumnNameCharacterMap indicates support for flexible column names.
	// Invalid column names will be rejected.
	StrictColumnNameCharacterMap ColumnNameCharacterMap = "STRICT"

	// V1ColumnNameCharacterMap indicates support for alphanumeric + underscore characters and names must start with a letter or underscore.
	// Invalid column names will be normalized.
	V1ColumnNameCharacterMap ColumnNameCharacterMap = "V1"

	// V2ColumnNameCharacterMap indicates support for flexible column names.
	// Invalid column names will be normalized.
	V2ColumnNameCharacterMap ColumnNameCharacterMap = "V2"
)

ColumnReference

type ColumnReference struct {
	// ReferencingColumn is the column in the current table that composes the foreign key.
	ReferencingColumn string
	// ReferencedColumn is the column in the primary key of the foreign table that
	// is referenced by the ReferencingColumn.
	ReferencedColumn string
}

ColumnReference represents the pair of the foreign key column and primary key column.

Compression

type Compression string

Compression is the type of compression to apply when writing data to Google Cloud Storage.

None, Gzip, Deflate, Snappy

const (
	// None specifies no compression.
	None Compression = "NONE"
	// Gzip specifies gzip compression.
	Gzip Compression = "GZIP"
	// Deflate specifies DEFLATE compression for Avro files.
	Deflate Compression = "DEFLATE"
	// Snappy specifies SNAPPY compression for Avro files.
	Snappy Compression = "SNAPPY"
)

ConnectionProperty

type ConnectionProperty struct {
	// Name of the connection property to set.
	Key string
	// Value of the connection property.
	Value string
}

ConnectionProperty represents a single key and value pair that can be sent alongside a query request or load job.
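
As a sketch, connection properties can be attached to a query's configuration; this assumes a client as in the earlier examples, and the session value is a hypothetical placeholder (a real session ID would come from a prior query run in a session):

q := client.Query("SELECT 1")
q.ConnectionProperties = []*bigquery.ConnectionProperty{
    {Key: "session_id", Value: "my-session-id"}, // hypothetical session ID
}
// TODO: Call Query.Run or Query.Read.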

Copier

type Copier struct {
	JobIDConfig
	CopyConfig
	// contains filtered or unexported fields
}

A Copier copies data into a BigQuery table from one or more BigQuery tables.

func (*Copier) Run

func (c *Copier) Run(ctx context.Context) (*Job, error)

Run initiates a copy job.

CopyConfig

type CopyConfig struct {
	// Srcs are the tables from which data will be copied.
	Srcs []*Table

	// Dst is the table into which the data will be copied.
	Dst *Table

	// CreateDisposition specifies the circumstances under which the destination table will be created.
	// The default is CreateIfNeeded.
	CreateDisposition TableCreateDisposition

	// WriteDisposition specifies how existing data in the destination table is treated.
	// The default is WriteEmpty.
	WriteDisposition TableWriteDisposition

	// The labels associated with this job.
	Labels map[string]string

	// Custom encryption configuration (e.g., Cloud KMS keys).
	DestinationEncryptionConfig *EncryptionConfig

	// One of the supported operation types when executing a Table Copy job.  By default this
	// copies tables, but can also be set to perform snapshot or restore operations.
	OperationType TableCopyOperationType

	// Sets a best-effort deadline on a specific job.  If job execution exceeds this
	// timeout, BigQuery may attempt to cancel this work automatically.
	//
	// This deadline cannot be adjusted or removed once the job is created.  Consider
	// using Job.Cancel in situations where you need more dynamic behavior.
	//
	// Experimental: this option is experimental and may be modified or removed in future versions,
	// regardless of any other documented package stability guarantees.
	JobTimeout time.Duration
}

CopyConfig holds the configuration for a copy job.
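
For instance, a sketch of creating a table snapshot via the copy machinery; the table names are illustrative, and SnapshotOperation is the snapshot variant of the copy operation type:

ds := client.Dataset("my_dataset")
copier := ds.Table("snap_of_src").CopierFrom(ds.Table("src"))
copier.OperationType = bigquery.SnapshotOperation
job, err := copier.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
_ = job // Poll the job for completion if desired.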

DMLStatistics

type DMLStatistics struct {
	// Rows added by the statement.
	InsertedRowCount int64
	// Rows removed by the statement.
	DeletedRowCount int64
	// Rows changed by the statement.
	UpdatedRowCount int64
}

DMLStatistics contains counts of row mutations triggered by a DML query statement.

DataFormat

type DataFormat string

DataFormat describes the format of BigQuery table data.

CSV, Avro, JSON, DatastoreBackup, GoogleSheets, Bigtable, Parquet, ORC, TFSavedModel, XGBoostBooster, Iceberg

const (
	CSV             DataFormat = "CSV"
	Avro            DataFormat = "AVRO"
	JSON            DataFormat = "NEWLINE_DELIMITED_JSON"
	DatastoreBackup DataFormat = "DATASTORE_BACKUP"
	GoogleSheets    DataFormat = "GOOGLE_SHEETS"
	Bigtable        DataFormat = "BIGTABLE"
	Parquet         DataFormat = "PARQUET"
	ORC             DataFormat = "ORC"
	// For BQ ML Models, TensorFlow Saved Model format.
	TFSavedModel DataFormat = "ML_TF_SAVED_MODEL"
	// For BQ ML Models, xgBoost Booster format.
	XGBoostBooster DataFormat = "ML_XGBOOST_BOOSTER"
	Iceberg        DataFormat = "ICEBERG"
)

Constants describing the format of BigQuery table data.

Dataset

type Dataset struct {
	ProjectID string
	DatasetID string
	// contains filtered or unexported fields
}

Dataset is a reference to a BigQuery dataset.

func (*Dataset) Create

func (d *Dataset) Create(ctx context.Context, md *DatasetMetadata) (err error)

Create creates a dataset in the BigQuery service.

An error will be returned if the dataset already exists. Pass in a DatasetMetadata value to configure the dataset.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.Dataset("my_dataset")
	if err := ds.Create(ctx, &bigquery.DatasetMetadata{Location: "EU"}); err != nil {
		// TODO: Handle error.
	}
}

func (*Dataset) CreateWithOptions

func (d *Dataset) CreateWithOptions(ctx context.Context, md *DatasetMetadata, opts ...DatasetOption) (err error)

CreateWithOptions creates a dataset in the BigQuery service, and provides additional options to control the behavior of the call.

An error will be returned if the dataset already exists. Pass in a DatasetMetadata value to configure the dataset.

func (*Dataset) Delete

func (d *Dataset) Delete(ctx context.Context) (err error)

Delete deletes the dataset. Delete will fail if the dataset is not empty.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	if err := client.Dataset("my_dataset").Delete(ctx); err != nil {
		// TODO: Handle error.
	}
}

func (*Dataset) DeleteWithContents

func (d *Dataset) DeleteWithContents(ctx context.Context) (err error)

DeleteWithContents deletes the dataset, as well as contained resources.

func (*Dataset) Identifier

func (d *Dataset) Identifier(f IdentifierFormat) (string, error)

Identifier returns the ID of the dataset in the requested format.

For Standard SQL format, the identifier will be quoted if the ProjectID contains dash (-) characters.
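
A minimal sketch, assuming ctx and a client as in the earlier examples and the StandardSQLID format:

ds := client.Dataset("my_dataset")
id, err := ds.Identifier(bigquery.StandardSQLID)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(id) // An identifier usable directly in query text.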

func (*Dataset) Metadata

func (d *Dataset) Metadata(ctx context.Context) (md *DatasetMetadata, err error)

Metadata fetches the metadata for the dataset.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	md, err := client.Dataset("my_dataset").Metadata(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(md)
}

func (*Dataset) MetadataWithOptions

func (d *Dataset) MetadataWithOptions(ctx context.Context, opts ...DatasetOption) (md *DatasetMetadata, err error)

MetadataWithOptions fetches metadata for the dataset, and provides additional options for controlling the request.

func (*Dataset) Model

func (d *Dataset) Model(modelID string) *Model

Model creates a handle to a BigQuery model in the dataset. To determine if a model exists, call Model.Metadata. If the model does not already exist, you can create it via execution of a CREATE MODEL query.

func (*Dataset) Models

func (d *Dataset) Models(ctx context.Context) *ModelIterator

Models returns an iterator over the models in the Dataset.

func (*Dataset) Routine

func (d *Dataset) Routine(routineID string) *Routine

Routine creates a handle to a BigQuery routine in the dataset. To determine if a routine exists, call Routine.Metadata.

func (*Dataset) Routines

func (d *Dataset) Routines(ctx context.Context) *RoutineIterator

Routines returns an iterator over the routines in the Dataset.

func (*Dataset) Table

func (d *Dataset) Table(tableID string) *Table

Table creates a handle to a BigQuery table in the dataset. To determine if a table exists, call Table.Metadata. If the table does not already exist, use Table.Create to create it.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	// Table creates a reference to the table. It does not create the actual
	// table in BigQuery; to do so, use Table.Create.
	t := client.Dataset("my_dataset").Table("my_table")
	fmt.Println(t)
}

func (*Dataset) Tables

func (d *Dataset) Tables(ctx context.Context) *TableIterator

Tables returns an iterator over the tables in the Dataset.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	it := client.Dataset("my_dataset").Tables(ctx)
	_ = it // TODO: iterate using Next or iterator.Pager.
}

func (*Dataset) Update

func (d *Dataset) Update(ctx context.Context, dm DatasetMetadataToUpdate, etag string) (md *DatasetMetadata, err error)

Update modifies specific Dataset metadata fields. To perform a read-modify-write that protects against intervening reads, set the etag argument to the DatasetMetadata.ETag field from the read. Pass the empty string for etag for a "blind write" that will always succeed.

Examples

blindWrite
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	md, err := client.Dataset("my_dataset").Update(ctx, bigquery.DatasetMetadataToUpdate{Name: "blind"}, "")
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(md)
}

readModifyWrite
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.Dataset("my_dataset")
	md, err := ds.Metadata(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	md2, err := ds.Update(ctx,
		bigquery.DatasetMetadataToUpdate{Name: "new " + md.Name},
		md.ETag)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(md2)
}

func (*Dataset) UpdateWithOptions

func (d *Dataset) UpdateWithOptions(ctx context.Context, dm DatasetMetadataToUpdate, etag string, opts ...DatasetOption) (md *DatasetMetadata, err error)

UpdateWithOptions modifies specific Dataset metadata fields and provides an interface for specifying additional options to the request.

To perform a read-modify-write that protects against intervening reads, set the etag argument to the DatasetMetadata.ETag field from the read. Pass the empty string for etag for a "blind write" that will always succeed.

DatasetAccessEntry

type DatasetAccessEntry struct {
	// The dataset to which this entry applies.
	Dataset *Dataset
	// The list of target types within the dataset
	// to which this entry applies.
	//
	// Current supported values:
	//
	// VIEWS - This entry applies to views in the dataset.
	TargetTypes []string
}

DatasetAccessEntry is an access entry that refers to resources within another dataset.

DatasetIterator

type DatasetIterator struct {
	// ListHidden causes hidden datasets to be listed when set to true.
	// Set before the first call to Next.
	ListHidden bool

	// Filter restricts the datasets returned by label. The filter syntax is described in
	// https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
	// Set before the first call to Next.
	Filter string

	// The project ID of the listed datasets.
	// Set before the first call to Next.
	ProjectID string
	// contains filtered or unexported fields
}

DatasetIterator iterates over the datasets in a project.

func (*DatasetIterator) Next

func (it *DatasetIterator) Next() (*Dataset, error)

Next returns the next Dataset. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	it := client.Datasets(ctx)
	for {
		ds, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		fmt.Println(ds)
	}
}

func (*DatasetIterator) PageInfo

func (it *DatasetIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

DatasetMetadata

type DatasetMetadata struct {
	// These fields can be set when creating a dataset.
	Name                    string            // The user-friendly name for this dataset.
	Description             string            // The user-friendly description of this dataset.
	Location                string            // The geo location of the dataset.
	DefaultTableExpiration  time.Duration     // The default expiration time for new tables.
	Labels                  map[string]string // User-provided labels.
	Access                  []*AccessEntry    // Access permissions.
	DefaultEncryptionConfig *EncryptionConfig

	// DefaultPartitionExpiration is the default expiration time for
	// all newly created partitioned tables in the dataset.
	DefaultPartitionExpiration time.Duration

	// Defines the default collation specification of future tables
	// created in the dataset. If a table is created in this dataset without
	// table-level default collation, then the table inherits the dataset default
	// collation, which is applied to the string fields that do not have explicit
	// collation specified. A change to this field affects only tables created
	// afterwards, and does not alter the existing tables.
	// More information: https://cloud.google.com/bigquery/docs/reference/standard-sql/collation-concepts
	DefaultCollation string

	// For externally defined datasets, contains information about the configuration.
	ExternalDatasetReference *ExternalDatasetReference

	// MaxTimeTravel represents the number of hours for the max time travel for all tables
	// in the dataset.  Durations are rounded towards zero for the nearest hourly value.
	MaxTimeTravel time.Duration

	// Storage billing model to be used for all tables in the dataset.
	// Can be set to PHYSICAL. Default is LOGICAL.
	// Once you create a dataset with storage billing model set to physical bytes, you can't change it back to using logical bytes again.
	// More details: https://cloud.google.com/bigquery/docs/datasets-intro#dataset_storage_billing_models
	StorageBillingModel string

	// These fields are read-only.
	CreationTime     time.Time
	LastModifiedTime time.Time // When the dataset or any of its tables were modified.
	FullID           string    // The full dataset ID in the form projectID:datasetID.

	// The tags associated with this dataset. Tag keys are
	// globally unique, and managed via the resource manager API.
	// More information: https://cloud.google.com/resource-manager/docs/tags/tags-overview
	Tags []*DatasetTag

	// TRUE if the dataset and its table names are case-insensitive, otherwise
	// FALSE. By default, this is FALSE, which means the dataset and its table
	// names are case-sensitive. This field does not affect routine references.
	IsCaseInsensitive bool

	// ETag is the ETag obtained when reading metadata. Pass it to Dataset.Update to
	// ensure that the metadata hasn't changed since it was read.
	ETag string
}

DatasetMetadata contains information about a BigQuery dataset.

DatasetMetadataToUpdate

type DatasetMetadataToUpdate struct {
	Description optional.String // The user-friendly description of this dataset.
	Name        optional.String // The user-friendly name for this dataset.

	// DefaultTableExpiration is the default expiration time for new tables.
	// If set to time.Duration(0), new tables never expire.
	DefaultTableExpiration optional.Duration

	// DefaultPartitionExpiration is the default expiration time for
	// all newly created partitioned tables.
	// If set to time.Duration(0), new table partitions never expire.
	DefaultPartitionExpiration optional.Duration

	// DefaultEncryptionConfig defines CMEK settings for new resources created
	// in the dataset.
	DefaultEncryptionConfig *EncryptionConfig

	// Defines the default collation specification of future tables
	// created in the dataset.
	DefaultCollation optional.String

	// For externally defined datasets, contains information about the configuration.
	ExternalDatasetReference *ExternalDatasetReference

	// MaxTimeTravel represents the number of hours for the max time travel for all tables
	// in the dataset.  Durations are rounded towards zero for the nearest hourly value.
	MaxTimeTravel optional.Duration

	// Storage billing model to be used for all tables in the dataset.
	// Can be set to PHYSICAL. Default is LOGICAL.
	// Once you change a dataset's storage billing model to use physical bytes, you can't change it back to using logical bytes again.
	// More details: https://cloud.google.com/bigquery/docs/datasets-intro#dataset_storage_billing_models
	StorageBillingModel optional.String

	// The entire access list. It is not possible to replace individual entries.
	Access []*AccessEntry

	// TRUE if the dataset and its table names are case-insensitive, otherwise
	// FALSE. By default, this is FALSE, which means the dataset and its table
	// names are case-sensitive. This field does not affect routine references.
	IsCaseInsensitive optional.Bool
	// contains filtered or unexported fields
}

DatasetMetadataToUpdate is used when updating a dataset's metadata. Only non-nil fields will be updated.

func (*DatasetMetadataToUpdate) DeleteLabel

func (u *DatasetMetadataToUpdate) DeleteLabel(name string)

DeleteLabel causes a label to be deleted on a call to Update.

func (*DatasetMetadataToUpdate) SetLabel

func (u *DatasetMetadataToUpdate) SetLabel(name, value string)

SetLabel causes a label to be added or modified on a call to Update.
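
A sketch of combining SetLabel and DeleteLabel in a single update, assuming ctx and a client as in the earlier examples; the label names are illustrative:

ds := client.Dataset("my_dataset")
dmu := bigquery.DatasetMetadataToUpdate{}
dmu.SetLabel("cost_center", "analytics") // add or change a label
dmu.DeleteLabel("temporary")             // remove a label
if _, err := ds.Update(ctx, dmu, ""); err != nil {
    // TODO: Handle error.
}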

DatasetOption

type DatasetOption func(*dsCallOption)

DatasetOption provides an option type for customizing requests against the Dataset service.

func WithAccessPolicyVersion

func WithAccessPolicyVersion(apv int) DatasetOption

WithAccessPolicyVersion is an option that enables setting the Access Policy Version for a request where appropriate. Valid values are 0, 1, and 3.

Requests specifying an invalid value will be rejected. Requests for conditional access policy binding in datasets must specify version 3.

Datasets with no conditional role bindings in their access policy may specify any valid value or leave the field unset.

This field will be mapped to the IAM policy version and will be used to fetch the policy from IAM. If unset, or if 0 or 1 is used for a dataset with conditional bindings, an access entry with a condition will have its role string appended with 'withcond' followed by a hash value.

Please refer https://cloud.google.com/iam/docs/troubleshooting-withcond for more details.

DatasetTag

type DatasetTag struct {
	// TagKey is the namespaced friendly name of the tag key, e.g.
	// "12345/environment" where 12345 is org id.
	TagKey string

	// TagValue is the friendly short name of the tag value, e.g.
	// "production".
	TagValue string
}

DatasetTag is a representation of a single tag key/value.

DecimalTargetType

type DecimalTargetType string

DecimalTargetType is used to express preference ordering for converting values from external formats.

NumericTargetType, BigNumericTargetType, StringTargetType

var (
	// NumericTargetType indicates the preferred type is NUMERIC when supported.
	NumericTargetType DecimalTargetType = "NUMERIC"

	// BigNumericTargetType indicates the preferred type is BIGNUMERIC when supported.
	BigNumericTargetType DecimalTargetType = "BIGNUMERIC"

	// StringTargetType indicates the preferred type is STRING when supported.
	StringTargetType DecimalTargetType = "STRING"
)

Encoding

type Encoding string

Encoding specifies the character encoding of data to be loaded into BigQuery. See https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.encoding for more details about how this is used.

UTF_8, ISO_8859_1

const (
	// UTF_8 specifies the UTF-8 encoding type.
	UTF_8 Encoding = "UTF-8"
	// ISO_8859_1 specifies the ISO-8859-1 encoding type.
	ISO_8859_1 Encoding = "ISO-8859-1"
)

EncryptionConfig

type EncryptionConfig struct {
	// Describes the Cloud KMS encryption key that will be used to protect
	// destination BigQuery table. The BigQuery Service Account associated with your
	// project requires access to this encryption key.
	KMSKeyName string
}

EncryptionConfig configures customer-managed encryption on tables and ML models.

EntityType

type EntityType int

EntityType is the type of entity in an AccessEntry.

DomainEntity, GroupEmailEntity, UserEmailEntity, SpecialGroupEntity, ViewEntity, IAMMemberEntity, RoutineEntity, DatasetEntity

const (
	// DomainEntity is a domain (e.g. "example.com").
	DomainEntity EntityType = iota + 1

	// GroupEmailEntity is an email address of a Google Group.
	GroupEmailEntity

	// UserEmailEntity is an email address of an individual user.
	UserEmailEntity

	// SpecialGroupEntity is a special group: one of projectOwners, projectReaders, projectWriters or
	// allAuthenticatedUsers.
	SpecialGroupEntity

	// ViewEntity is a BigQuery logical view.
	ViewEntity

	// IAMMemberEntity represents entities present in IAM but not represented using
	// the other entity types.
	IAMMemberEntity

	// RoutineEntity is a BigQuery routine, referencing a User Defined Function (UDF).
	RoutineEntity

	// DatasetEntity is a BigQuery dataset, present in the access list.
	DatasetEntity
)

Error

type Error struct {
	// Mirrors bq.ErrorProto, but drops DebugInfo
	Location, Message, Reason string
}

An Error contains detailed information about a failed bigquery operation. Detailed description of possible Reasons can be found here: https://cloud.google.com/bigquery/troubleshooting-errors.

func (Error) Error

func (e Error) Error() string

ExplainQueryStage

type ExplainQueryStage struct {
	// CompletedParallelInputs: Number of parallel input segments completed.
	CompletedParallelInputs int64

	// ComputeAvg: Duration the average shard spent on CPU-bound tasks.
	ComputeAvg time.Duration

	// ComputeMax: Duration the slowest shard spent on CPU-bound tasks.
	ComputeMax time.Duration

	// Relative amount of the total time the average shard spent on CPU-bound tasks.
	ComputeRatioAvg float64

	// Relative amount of the total time the slowest shard spent on CPU-bound tasks.
	ComputeRatioMax float64

	// EndTime: Stage end time.
	EndTime time.Time

	// Unique ID for stage within plan.
	ID int64

	// InputStages: IDs for stages that are inputs to this stage.
	InputStages []int64

	// Human-readable name for stage.
	Name string

	// ParallelInputs: Number of parallel input segments to be processed.
	ParallelInputs int64

	// ReadAvg: Duration the average shard spent reading input.
	ReadAvg time.Duration

	// ReadMax: Duration the slowest shard spent reading input.
	ReadMax time.Duration

	// Relative amount of the total time the average shard spent reading input.
	ReadRatioAvg float64

	// Relative amount of the total time the slowest shard spent reading input.
	ReadRatioMax float64

	// Number of records read into the stage.
	RecordsRead int64

	// Number of records written by the stage.
	RecordsWritten int64

	// ShuffleOutputBytes: Total number of bytes written to shuffle.
	ShuffleOutputBytes int64

	// ShuffleOutputBytesSpilled: Total number of bytes written to shuffle
	// and spilled to disk.
	ShuffleOutputBytesSpilled int64

	// StartTime: Stage start time.
	StartTime time.Time

	// Current status for the stage.
	Status string

	// List of operations within the stage in dependency order (approximately
	// chronological).
	Steps []*ExplainQueryStep

	// WaitAvg: Duration the average shard spent waiting to be scheduled.
	WaitAvg time.Duration

	// WaitMax: Duration the slowest shard spent waiting to be scheduled.
	WaitMax time.Duration

	// Relative amount of the total time the average shard spent waiting to be scheduled.
	WaitRatioAvg float64

	// Relative amount of the total time the slowest shard spent waiting to be scheduled.
	WaitRatioMax float64

	// WriteAvg: Duration the average shard spent on writing output.
	WriteAvg time.Duration

	// WriteMax: Duration the slowest shard spent on writing output.
	WriteMax time.Duration

	// Relative amount of the total time the average shard spent on writing output.
	WriteRatioAvg float64

	// Relative amount of the total time the slowest shard spent on writing output.
	WriteRatioMax float64
}

ExplainQueryStage describes one stage of a query.
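
As an illustrative sketch (not a canonical example), the following reads the query plan after a query job finishes. It assumes that JobStatistics.Details can be type-asserted to *QueryStatistics, whose QueryPlan field holds []*ExplainQueryStage (QueryStatistics is documented elsewhere in this package).

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	job, err := client.Query("SELECT COUNT(*) FROM `bigquery-public-data.samples.shakespeare`").Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	status, err := job.Wait(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	if status.Err() != nil {
		// TODO: Handle error.
	}
	// Assumes Statistics.Details holds a *QueryStatistics for query jobs.
	if qs, ok := status.Statistics.Details.(*bigquery.QueryStatistics); ok {
		for _, stage := range qs.QueryPlan {
			fmt.Println(stage.Name, stage.RecordsRead, stage.RecordsWritten)
		}
	}
}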

ExplainQueryStep

type ExplainQueryStep struct {
	// Machine-readable operation type.
	Kind string

	// Human-readable stage descriptions.
	Substeps []string
}

ExplainQueryStep describes one step of a query stage.

ExportDataStatistics

type ExportDataStatistics struct {
	// Number of destination files generated.
	FileCount int64

	// Number of destination rows generated.
	RowCount int64
}

ExportDataStatistics represents statistics for an EXPORT DATA statement run as part of a query job.

Expr

type Expr struct {
	// Textual representation of an expression in Common Expression Language syntax.
	Expression string

	// Optional. Title for the expression, i.e. a short string describing
	// its purpose. This can be used e.g. in UIs which allow entering the
	// expression.
	Title string

	// Optional. Description of the expression. This is a longer text which
	// describes the expression, e.g. when hovered over it in a UI.
	Description string

	// Optional. String indicating the location of the expression for error
	// reporting, e.g. a file name and a position in the file.
	Location string
}

Expr represents the conditional information related to dataset access policies.

ExternalData

type ExternalData interface {
	// contains filtered or unexported methods
}

ExternalData is a table which is stored outside of BigQuery. It is implemented by *ExternalDataConfig. GCSReference also implements it, for backwards compatibility.

ExternalDataConfig

type ExternalDataConfig struct {
	// The format of the data. Required.
	SourceFormat DataFormat

	// The fully-qualified URIs that point to your
	// data in Google Cloud. Required.
	//
	// For Google Cloud Storage URIs, each URI can contain one '*' wildcard character
	// and it must come after the 'bucket' name. Size limits related to load jobs
	// apply to external data sources.
	//
	// For Google Cloud Bigtable URIs, exactly one URI can be specified and it has to be
	// a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
	//
	// For Google Cloud Datastore backups, exactly one URI can be specified. Also,
	// the '*' wildcard character is not allowed.
	SourceURIs []string

	// The schema of the data. Required for CSV and JSON; disallowed for the
	// other formats.
	Schema Schema

	// Try to detect schema and format options automatically.
	// Any option specified explicitly will be honored.
	AutoDetect bool

	// The compression type of the data.
	Compression Compression

	// IgnoreUnknownValues causes values not matching the schema to be
	// tolerated. Unknown values are ignored. For CSV this ignores extra values
	// at the end of a line. For JSON this ignores named values that do not
	// match any column name. If this field is not set, records containing
	// unknown values are treated as bad records. The MaxBadRecords field can
	// be used to customize how bad records are handled.
	IgnoreUnknownValues bool

	// MaxBadRecords is the maximum number of bad records that will be ignored
	// when reading data.
	MaxBadRecords int64

	// Additional options for CSV, GoogleSheets, Bigtable, and Parquet formats.
	Options ExternalDataConfigOptions

	// HivePartitioningOptions allows use of Hive partitioning based on the
	// layout of objects in Google Cloud Storage.
	HivePartitioningOptions *HivePartitioningOptions

	// DecimalTargetTypes allows selection of how decimal values are converted when
	// processed in bigquery, subject to the value type having sufficient precision/scale
	// to support the values.  In the order of NUMERIC, BIGNUMERIC, and STRING, a type is
	// selected if it is present in the list and if it supports the necessary precision and scale.
	//
	// StringTargetType supports all precision and scale values.
	DecimalTargetTypes []DecimalTargetType

	// ConnectionID associates an external data configuration with a connection ID.
	// Connections are managed through the BigQuery Connection API:
	// https://pkg.go.dev/cloud.google.com/go/bigquery/connection/apiv1
	ConnectionID string

	// When creating an external table, the user can provide a reference file with the table schema.
	// This is enabled for the following formats: AVRO, PARQUET, ORC.
	ReferenceFileSchemaURI string

	// Metadata Cache Mode for the table. Set this to
	// enable caching of metadata from external data source.
	MetadataCacheMode MetadataCacheMode
}

ExternalDataConfig describes data external to BigQuery that can be used in queries and to create external tables.

ExternalDataConfigOptions

type ExternalDataConfigOptions interface {
	// contains filtered or unexported methods
}

ExternalDataConfigOptions are additional options for external data configurations. This interface is implemented by CSVOptions, GoogleSheetsOptions and BigtableOptions.

ExternalDatasetReference

type ExternalDatasetReference struct {
	// The connection id that is used to access the external_source.
	// Format: projects/{project_id}/locations/{location_id}/connections/{connection_id}
	Connection string

	// External source that backs this dataset.
	ExternalSource string
}

ExternalDatasetReference provides information about external dataset metadata.

ExtractConfig

type ExtractConfig struct {
	// Src is the table from which data will be extracted.
	// Only one of Src or SrcModel should be specified.
	Src *Table

	// SrcModel is the ML model from which the data will be extracted.
	// Only one of Src or SrcModel should be specified.
	SrcModel *Model

	// Dst is the destination into which the data will be extracted.
	Dst *GCSReference

	// DisableHeader disables the printing of a header row in exported data.
	DisableHeader bool

	// The labels associated with this job.
	Labels map[string]string

	// For Avro-based extracts, controls whether logical type annotations are generated.
	//
	// Example:  With this enabled, writing a BigQuery TIMESTAMP column will result in
	// an integer column annotated with the appropriate timestamp-micros/millis annotation
	// in the resulting Avro files.
	UseAvroLogicalTypes bool

	// Sets a best-effort deadline on a specific job.  If job execution exceeds this
	// timeout, BigQuery may attempt to cancel this work automatically.
	//
	// This deadline cannot be adjusted or removed once the job is created.  Consider
	// using Job.Cancel in situations where you need more dynamic behavior.
	//
	// Experimental: this option is experimental and may be modified or removed in future versions,
	// regardless of any other documented package stability guarantees.
	JobTimeout time.Duration
}

ExtractConfig holds the configuration for an extract job.

ExtractStatistics

type ExtractStatistics struct {
	// The number of files per destination URI or URI pattern specified in the
	// extract configuration. These values will be in the same order as the
	// URIs specified in the 'destinationUris' field.
	DestinationURIFileCounts []int64
}

ExtractStatistics contains statistics about an extract job.

Extractor

type Extractor struct {
	JobIDConfig
	ExtractConfig
	// contains filtered or unexported fields
}

An Extractor extracts data from a BigQuery table into Google Cloud Storage.

func (*Extractor) Run

func (e *Extractor) Run(ctx context.Context) (j *Job, err error)

Run initiates an extract job.
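
A minimal illustrative sketch of configuring and running an extract job. It assumes the Table.ExtractorTo method (documented with Table elsewhere in this package); the bucket and table names are placeholders.

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	// Destination in Google Cloud Storage.
	gcsRef := bigquery.NewGCSReference("gs://my-bucket/extract-*.csv")
	gcsRef.DestinationFormat = bigquery.CSV
	// Assumes Table.ExtractorTo, which builds an Extractor for the table.
	extractor := client.Dataset("my_dataset").Table("my_table").ExtractorTo(gcsRef)
	extractor.DisableHeader = true
	job, err := extractor.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	status, err := job.Wait(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	if status.Err() != nil {
		// TODO: Handle error.
	}
}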

FieldSchema

type FieldSchema struct {
	// The field name.
	// Must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_),
	// and must start with a letter or underscore.
	// The maximum length is 128 characters.
	Name string

	// A description of the field. The maximum length is 16,384 characters.
	Description string

	// Whether the field may contain multiple values.
	Repeated bool
	// Whether the field is required.  Ignored if Repeated is true.
	Required bool

	// The field data type.  If Type is Record, then this field contains a nested schema,
	// which is described by Schema.
	Type FieldType

	// Annotations for enforcing column-level security constraints.
	PolicyTags *PolicyTagList

	// Describes the nested schema if Type is set to Record.
	Schema Schema

	// Maximum length of the field for STRING or BYTES type.
	//
	// It is invalid to set value for types other than STRING or BYTES.
	//
	// For STRING type, this represents the maximum UTF-8 length of strings
	// allowed in the field. For BYTES type, this represents the maximum
	// number of bytes in the field.
	MaxLength int64

	// Precision can be used to constrain the maximum number of
	// total digits allowed for NUMERIC or BIGNUMERIC types.
	//
	// It is invalid to set values for Precision for types other than
	// NUMERIC or BIGNUMERIC.
	//
	// For NUMERIC type, acceptable values for Precision must
	// be: 1 ≤ (Precision - Scale) ≤ 29. Values for Scale
	// must be: 0 ≤ Scale ≤ 9.
	//
	// For BIGNUMERIC type, acceptable values for Precision must
	// be: 1 ≤ (Precision - Scale) ≤ 38. Values for Scale
	// must be: 0 ≤ Scale ≤ 38.
	Precision int64

	// Scale can be used to constrain the maximum number of digits
	// in the fractional part of a NUMERIC or BIGNUMERIC type.
	//
	// If the Scale value is set, the Precision value must be set as well.
	//
	// It is invalid to set values for Scale for types other than
	// NUMERIC or BIGNUMERIC.
	//
	// See the Precision field for additional guidance about valid values.
	Scale int64

	// DefaultValueExpression is used to specify the default value of a field
	// using a SQL expression.  It can only be set for top level fields (columns).
	//
	// You can use struct or array expression to specify default value for the
	// entire struct or array. The valid SQL expressions are:
	//
	// - Literals for all data types, including STRUCT and ARRAY.
	// - The following functions:
	//   - CURRENT_TIMESTAMP
	//   - CURRENT_TIME
	//   - CURRENT_DATE
	//   - CURRENT_DATETIME
	//   - GENERATE_UUID
	//   - RAND
	//   - SESSION_USER
	//   - ST_GEOGPOINT
	//   - Struct or array composed with the above allowed functions, for example:
	//       [CURRENT_DATE(), DATE '2020-01-01']"
	DefaultValueExpression string

	// Collation can be set only when the type of field is STRING.
	// The following values are supported:
	//   - 'und:ci': undetermined locale, case insensitive.
	//   - '': empty string. Default to case-sensitive behavior.
	// More information: https://cloud.google.com/bigquery/docs/reference/standard-sql/collation-concepts
	Collation string

	// Information about the range.
	// If the type is RANGE, this field is required.
	RangeElementType *RangeElementType

	// RoundingMode specifies the rounding mode to be used when storing
	// values of NUMERIC and BIGNUMERIC type.
	// If unspecified, default value is RoundHalfAwayFromZero.
	RoundingMode RoundingMode
}

FieldSchema describes a single field.
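
As an illustrative sketch, the following builds a Schema by hand from FieldSchema values, including a nested RECORD field, and uses it to create a table. The dataset, table and field names are placeholders, and TableMetadata is documented elsewhere in this package.

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	// Schema is a []*FieldSchema; the &FieldSchema may be elided in the literal.
	schema := bigquery.Schema{
		{Name: "full_name", Type: bigquery.StringFieldType, Required: true},
		{Name: "age", Type: bigquery.IntegerFieldType},
		{Name: "addresses", Type: bigquery.RecordFieldType, Repeated: true, Schema: bigquery.Schema{
			{Name: "city", Type: bigquery.StringFieldType},
			{Name: "zip", Type: bigquery.StringFieldType, MaxLength: 10},
		}},
	}
	// Use the schema when creating a table.
	t := client.Dataset("my_dataset").Table("people")
	if err := t.Create(ctx, &bigquery.TableMetadata{Schema: schema}); err != nil {
		// TODO: Handle error.
	}
}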

FieldType

type FieldType string

FieldType is the type of field.

StringFieldType, BytesFieldType, IntegerFieldType, FloatFieldType, BooleanFieldType, TimestampFieldType, RecordFieldType, DateFieldType, TimeFieldType, DateTimeFieldType, NumericFieldType, GeographyFieldType, BigNumericFieldType, IntervalFieldType, JSONFieldType, RangeFieldType

const (
	// StringFieldType is a string field type.
	StringFieldType FieldType = "STRING"
	// BytesFieldType is a bytes field type.
	BytesFieldType FieldType = "BYTES"
	// IntegerFieldType is a integer field type.
	IntegerFieldType FieldType = "INTEGER"
	// FloatFieldType is a float field type.
	FloatFieldType FieldType = "FLOAT"
	// BooleanFieldType is a boolean field type.
	BooleanFieldType FieldType = "BOOLEAN"
	// TimestampFieldType is a timestamp field type.
	TimestampFieldType FieldType = "TIMESTAMP"
	// RecordFieldType is a record field type. It is typically used to create columns with repeated or nested data.
	RecordFieldType FieldType = "RECORD"
	// DateFieldType is a date field type.
	DateFieldType FieldType = "DATE"
	// TimeFieldType is a time field type.
	TimeFieldType FieldType = "TIME"
	// DateTimeFieldType is a datetime field type.
	DateTimeFieldType FieldType = "DATETIME"
	// NumericFieldType is a numeric field type. Numeric types include integer types, floating point types and the
	// NUMERIC data type.
	NumericFieldType FieldType = "NUMERIC"
	// GeographyFieldType is a string field type.  Geography types represent a set of points
	// on the Earth's surface, represented in Well Known Text (WKT) format.
	GeographyFieldType FieldType = "GEOGRAPHY"
	// BigNumericFieldType is a numeric field type that supports values of larger precision
	// and scale than the NumericFieldType.
	BigNumericFieldType FieldType = "BIGNUMERIC"
	// IntervalFieldType is a representation of a duration or an amount of time.
	IntervalFieldType FieldType = "INTERVAL"
	// JSONFieldType is a representation of a json object.
	JSONFieldType FieldType = "JSON"
	// RangeFieldType represents a continuous range of values.
	RangeFieldType FieldType = "RANGE"
)

FileConfig

type FileConfig struct {
	// SourceFormat is the format of the data to be read.
	// Allowed values are: Avro, CSV, DatastoreBackup, JSON, ORC, and Parquet.  The default is CSV.
	SourceFormat DataFormat

	// Indicates if we should automatically infer the options and
	// schema for CSV and JSON sources.
	AutoDetect bool

	// MaxBadRecords is the maximum number of bad records that will be ignored
	// when reading data.
	MaxBadRecords int64

	// IgnoreUnknownValues causes values not matching the schema to be
	// tolerated. Unknown values are ignored. For CSV this ignores extra values
	// at the end of a line. For JSON this ignores named values that do not
	// match any column name. If this field is not set, records containing
	// unknown values are treated as bad records. The MaxBadRecords field can
	// be used to customize how bad records are handled.
	IgnoreUnknownValues bool

	// Schema describes the data. It is required when reading CSV or JSON data,
	// unless the data is being loaded into a table that already exists.
	Schema Schema

	// Additional options for CSV files.
	CSVOptions

	// Additional options for Parquet files.
	ParquetOptions *ParquetOptions

	// Additional options for Avro files.
	AvroOptions *AvroOptions
}

FileConfig contains configuration options that pertain to files, typically text files that require interpretation to be used as a BigQuery table. A file may live in Google Cloud Storage (see GCSReference), or it may be loaded into a table via the Table.LoaderFromReader.

ForeignKey

type ForeignKey struct {
	// Foreign key constraint name.
	Name string

	// Table that holds the primary key and is referenced by this foreign key.
	ReferencedTable *Table

	// Columns that compose the foreign key.
	ColumnReferences []*ColumnReference
}

ForeignKey represents a foreign key constraint on a table's columns.

GCSReference

type GCSReference struct {
	// URIs refer to Google Cloud Storage objects.
	URIs []string

	FileConfig

	// DestinationFormat is the format to use when writing exported files.
	// Allowed values are: CSV, Avro, JSON.  The default is CSV.
	// CSV is not supported for tables with nested or repeated fields.
	DestinationFormat DataFormat

	// Compression specifies the type of compression to apply when writing data
	// to Google Cloud Storage, or using this GCSReference as an ExternalData
	// source with CSV or JSON SourceFormat. Default is None.
	//
	// Avro files allow additional compression types: DEFLATE and SNAPPY.
	Compression Compression
}

GCSReference is a reference to one or more Google Cloud Storage objects, which together constitute an input or output to a BigQuery operation.

func NewGCSReference

func NewGCSReference(uri string) *GCSReference

NewGCSReference constructs a reference to one or more Google Cloud Storage objects, which together constitute a data source or destination. In the simple case, a single URI in the form gs://bucket/object may refer to a single GCS object. Data may also be split into multiple files, if multiple URIs or URIs containing wildcards are provided. Each URI may contain one '*' wildcard character, which (if present) must come after the bucket name. For more information about the treatment of wildcards and multiple URIs, see https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple

Example

package main

import (
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
	fmt.Println(gcsRef)
}

GoogleSheetsOptions

type GoogleSheetsOptions struct {
	// The number of rows at the top of a sheet that BigQuery will skip when
	// reading data.
	SkipLeadingRows int64
	// Optionally specifies a more specific range of cells to include.
	// Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id
	//
	// Example: sheet1!A1:B20
	Range string
}

GoogleSheetsOptions are additional options for GoogleSheets external data sources.

HivePartitioningMode

type HivePartitioningMode string

HivePartitioningMode is used in conjunction with HivePartitioningOptions.

AutoHivePartitioningMode, StringHivePartitioningMode, CustomHivePartitioningMode

const (
	// AutoHivePartitioningMode automatically infers partitioning key and types.
	AutoHivePartitioningMode HivePartitioningMode = "AUTO"
	// StringHivePartitioningMode automatically infers partitioning keys and treats values as string.
	StringHivePartitioningMode HivePartitioningMode = "STRINGS"
	// CustomHivePartitioningMode allows custom definition of the external partitioning.
	CustomHivePartitioningMode HivePartitioningMode = "CUSTOM"
)

HivePartitioningOptions

type HivePartitioningOptions struct {

	// Mode defines which hive partitioning mode to use when reading data.
	Mode HivePartitioningMode

	// When hive partition detection is requested, a common prefix for
	// all source uris should be supplied.  The prefix must end immediately
	// before the partition key encoding begins.
	//
	// For example, consider files following this data layout.
	//   gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
	//   gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
	//
	// When hive partitioning is requested with either AUTO or STRINGS
	// detection, the common prefix can be either of
	// gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing
	// slash does not matter).
	SourceURIPrefix string

	// If set to true, queries against this external table require
	// a partition filter to be present that can perform partition
	// elimination.  Hive-partitioned load jobs with this field
	// set to true will fail.
	RequirePartitionFilter bool
}

HivePartitioningOptions defines the behavior of Hive partitioning when working with external data.
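
For illustration, a sketch of attaching hive partitioning options to an ExternalDataConfig used to define an external table. The bucket paths are placeholders, and TableMetadata's ExternalDataConfig field is assumed from its documentation elsewhere in this package.

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	edc := &bigquery.ExternalDataConfig{
		SourceFormat: bigquery.Parquet,
		SourceURIs:   []string{"gs://my-bucket/path_to_table/*"},
		HivePartitioningOptions: &bigquery.HivePartitioningOptions{
			Mode:                   bigquery.AutoHivePartitioningMode,
			SourceURIPrefix:        "gs://my-bucket/path_to_table",
			RequirePartitionFilter: true,
		},
	}
	// Define an external table backed by the partitioned layout.
	t := client.Dataset("my_dataset").Table("my_external_table")
	if err := t.Create(ctx, &bigquery.TableMetadata{ExternalDataConfig: edc}); err != nil {
		// TODO: Handle error.
	}
}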

IdentifierFormat

type IdentifierFormat string

IdentifierFormat represents how certain resource identifiers, such as table references, are formatted.

StandardSQLID, LegacySQLID, StorageAPIResourceID, ErrUnknownIdentifierFormat

var (
	// StandardSQLID returns an identifier suitable for use with Standard SQL.
	StandardSQLID IdentifierFormat = "SQL"

	// LegacySQLID returns an identifier suitable for use with Legacy SQL.
	LegacySQLID IdentifierFormat = "LEGACY_SQL"

	// StorageAPIResourceID returns an identifier suitable for use with the Storage API.  Namely, it's for formatting
	// a table resource for invoking read and write functionality.
	StorageAPIResourceID IdentifierFormat = "STORAGE_API_RESOURCE"

	// ErrUnknownIdentifierFormat is indicative of requesting an identifier in a format that is
	// not supported.
	ErrUnknownIdentifierFormat = errors.New("unknown identifier format")
)
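
As an illustrative sketch, the following requests a table identifier in two formats. It uses Table.Identifier, the table-level counterpart of the Model.Identifier method shown later in this document; the dataset and table names are placeholders.

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	table := client.Dataset("my_dataset").Table("my_table")
	// Assumes Table.Identifier, analogous to Model.Identifier below.
	sqlID, err := table.Identifier(bigquery.StandardSQLID)
	if err != nil {
		// TODO: Handle error.
	}
	storageID, err := table.Identifier(bigquery.StorageAPIResourceID)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(sqlID, storageID)
}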

Inserter

type Inserter struct {

	// SkipInvalidRows causes rows containing invalid data to be silently
	// ignored. The default value is false, which causes the entire request to
	// fail if there is an attempt to insert an invalid row.
	SkipInvalidRows bool

	// IgnoreUnknownValues causes values not matching the schema to be ignored.
	// The default value is false, which causes records containing such values
	// to be treated as invalid records.
	IgnoreUnknownValues bool

	// A TableTemplateSuffix allows Inserters to create tables automatically.
	//
	// Experimental: this option is experimental and may be modified or removed in future versions,
	// regardless of any other documented package stability guarantees. In general,
	// the BigQuery team recommends the use of partitioned tables over sharding
	// tables by suffix.
	//
	// When you specify a suffix, the table you upload data to
	// will be used as a template for creating a new table, with the same schema,
	// called <target table name> + <suffix>.
	TableTemplateSuffix string
	// contains filtered or unexported fields
}

An Inserter does streaming inserts into a BigQuery table. It is safe for concurrent use.

func (*Inserter) Put

func (u *Inserter) Put(ctx context.Context, src interface{}) (err error)

Put uploads one or more rows to the BigQuery service.

If src is ValueSaver, then its Save method is called to produce a row for uploading.

If src is a struct or pointer to a struct, then a schema is inferred from it and used to create a StructSaver. The InsertID of the StructSaver will be empty.

If src is a slice of ValueSavers, structs, or struct pointers, then each element of the slice is treated as above, and multiple rows are uploaded.

Put returns a PutMultiError if one or more rows failed to be uploaded. The PutMultiError contains a RowInsertionError for each failed row.

Put will retry on temporary errors (see https://cloud.google.com/bigquery/troubleshooting-errors). This can result in duplicate rows if you do not use insert IDs. Also, if the error persists, the call will run indefinitely. Pass a context with a timeout to prevent hanging calls.

Examples

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

type Item struct {
	Name  string
	Size  float64
	Count int
}

// Save implements the ValueSaver interface.
func (i *Item) Save() (map[string]bigquery.Value, string, error) {
	return map[string]bigquery.Value{
		"Name":  i.Name,
		"Size":  i.Size,
		"Count": i.Count,
	}, "", nil
}

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ins := client.Dataset("my_dataset").Table("my_table").Inserter()
	// Item implements the ValueSaver interface.
	items := []*Item{
		{Name: "n1", Size: 32.6, Count: 7},
		{Name: "n2", Size: 4, Count: 2},
		{Name: "n3", Size: 101.5, Count: 1},
	}
	if err := ins.Put(ctx, items); err != nil {
		// TODO: Handle error.
	}
}

Example (struct)

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ins := client.Dataset("my_dataset").Table("my_table").Inserter()

	type score struct {
		Name string
		Num  int
	}
	scores := []score{
		{Name: "n1", Num: 12},
		{Name: "n2", Num: 31},
		{Name: "n3", Num: 7},
	}
	// Schema is inferred from the score type.
	if err := ins.Put(ctx, scores); err != nil {
		// TODO: Handle error.
	}
}

Example (structSaver)

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

var schema bigquery.Schema

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ins := client.Dataset("my_dataset").Table("my_table").Inserter()

	type score struct {
		Name string
		Num  int
	}

	// Assume schema holds the table's schema.
	savers := []*bigquery.StructSaver{
		{Struct: score{Name: "n1", Num: 12}, Schema: schema, InsertID: "id1"},
		{Struct: score{Name: "n2", Num: 31}, Schema: schema, InsertID: "id2"},
		{Struct: score{Name: "n3", Num: 7}, Schema: schema, InsertID: "id3"},
	}
	if err := ins.Put(ctx, savers); err != nil {
		// TODO: Handle error.
	}
}

Example (valuesSaver)

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

var schema bigquery.Schema

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}

	ins := client.Dataset("my_dataset").Table("my_table").Inserter()

	var vss []*bigquery.ValuesSaver
	for i, name := range []string{"n1", "n2", "n3"} {
		// Assume schema holds the table's schema.
		vss = append(vss, &bigquery.ValuesSaver{
			Schema:   schema,
			InsertID: name,
			Row:      []bigquery.Value{name, int64(i)},
		})
	}

	if err := ins.Put(ctx, vss); err != nil {
		// TODO: Handle error.
	}
}

IntervalValue

type IntervalValue struct {
	// In canonical form, Years and Months share a consistent sign and are reduced
	// to avoid large month values.
	Years  int32
	Months int32

	// In canonical form, Days are independent of the other parts and can have its
	// own sign.  There is no attempt to reduce larger Day values into the Y-M part.
	Days int32

	// In canonical form, the time parts all share a consistent sign and are reduced.
	Hours   int32
	Minutes int32
	Seconds int32
	// This represents the fractional seconds as nanoseconds.
	SubSecondNanos int32
}

IntervalValue is a Go type for representing BigQuery INTERVAL values. Intervals are represented using three distinct parts:

  • Years and Months
  • Days
  • Time (Hours/Mins/Seconds/Fractional Seconds).

More information about BigQuery INTERVAL types can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#interval_type

IntervalValue is EXPERIMENTAL and subject to change or removal without notice.

func IntervalValueFromDuration

func IntervalValueFromDuration(in time.Duration) *IntervalValue

IntervalValueFromDuration converts a time.Duration to an IntervalValue representation.

The converted duration only leverages the hours/minutes/seconds portion of the interval; the parts representing days, months, and years are not used.

func ParseInterval

func ParseInterval(value string) (*IntervalValue, error)

ParseInterval parses an interval in canonical string format and returns the IntervalValue it represents.

func (*IntervalValue) Canonicalize

func (iv *IntervalValue) Canonicalize() *IntervalValue

Canonicalize returns an IntervalValue where signs for elements in the Y-M and H:M:S.F are consistent and values are normalized/reduced.

Canonical form enables more consistent comparison of the encoded interval. For example, encoding an interval with 12 months is equivalent to an interval of 1 year.

func (*IntervalValue) IsCanonical

func (iv *IntervalValue) IsCanonical() bool

IsCanonical evaluates whether the current representation is in canonical form.

func (*IntervalValue) String

func (iv *IntervalValue) String() string

String returns string representation of the interval value using the canonical format. The canonical format is as follows:

[sign]Y-M [sign]D [sign]H:M:S[.F]

func (*IntervalValue) ToDuration

func (iv *IntervalValue) ToDuration() time.Duration

ToDuration converts an interval to a time.Duration value.

For the purposes of conversion: Years are normalized to 12 months. Months are normalized to 30 days. Days are normalized to 24 hours.
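
A small worked sketch of these normalization rules, using the canonical string format shown above; the exact printed form of String is indicative rather than guaranteed.

package main

import (
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

func main() {
	// Canonical form: [sign]Y-M [sign]D [sign]H:M:S[.F]
	iv, err := bigquery.ParseInterval("0-0 1 2:30:00")
	if err != nil {
		// TODO: Handle error.
	}
	// 1 day is normalized to 24 hours, so the expected result is 26h30m0s.
	fmt.Println(iv.ToDuration())

	// Converting a duration uses only the time part of the interval.
	iv2 := bigquery.IntervalValueFromDuration(90 * time.Minute)
	fmt.Println(iv2.String()) // expected to print the canonical form, e.g. 0-0 0 1:30:00
}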

Job

type Job struct {
	// contains filtered or unexported fields
}

A Job represents an operation which has been submitted to BigQuery for processing.

func (*Job) Cancel

func (j *Job) Cancel(ctx context.Context) error

Cancel requests that a job be cancelled. This method returns without waiting for cancellation to take effect. To check whether the job has terminated, use Job.Status. Cancelled jobs may still incur costs.

func (*Job) Children

func (j *Job) Children(ctx context.Context) *JobIterator

Children returns a job iterator for enumerating child jobs of the current job. Currently only scripts, a form of query job, will create child jobs.

func (*Job) Config

func (j *Job) Config() (JobConfig, error)

Config returns the configuration information for j.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.Dataset("my_dataset")
	job, err := ds.Table("t1").CopierFrom(ds.Table("t2")).Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	jc, err := job.Config()
	if err != nil {
		// TODO: Handle error.
	}
	copyConfig := jc.(*bigquery.CopyConfig)
	fmt.Println(copyConfig.Dst, copyConfig.CreateDisposition)
}

func (*Job) Delete

func (j *Job) Delete(ctx context.Context) (err error)

Delete deletes the job.

func (*Job) Email

func (j *Job) Email() string

Email returns the email of the job's creator.

func (*Job) ID

func (j *Job) ID() string

ID returns the job's ID.

func (*Job) LastStatus

func (j *Job) LastStatus() *JobStatus

LastStatus returns the most recently retrieved status of the job. The status is retrieved when a new job is created, or when JobFromID or Job.Status is called. Call Job.Status to get the most up-to-date information about a job.

func (*Job) Location

func (j *Job) Location() string

Location returns the job's location.

func (*Job) ProjectID

func (j *Job) ProjectID() string

ProjectID returns the job's associated project.

func (*Job) Read

func (j *Job) Read(ctx context.Context) (ri *RowIterator, err error)

Read fetches the results of a query job. If j is not a query job, Read returns an error.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("select name, num from t1")
	// Call Query.Run to get a Job, then call Read on the job.
	// Note: Query.Read is a shorthand for this.
	job, err := q.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	it, err := job.Read(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	_ = it // TODO: iterate using Next or iterator.Pager.
}

func (*Job) Status

func (j *Job) Status(ctx context.Context) (js *JobStatus, err error)

Status retrieves the current status of the job from BigQuery. It fails if the Status could not be determined.

func (*Job) Wait

func (j *Job) Wait(ctx context.Context) (js *JobStatus, err error)

Wait blocks until the job or the context is done. It returns the final status of the job. If an error occurs while retrieving the status, Wait returns that error. But Wait returns nil if the status was retrieved successfully, even if status.Err() != nil. So callers must check both errors. See the example.

Example

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	ds := client.Dataset("my_dataset")
	job, err := ds.Table("t1").CopierFrom(ds.Table("t2")).Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	status, err := job.Wait(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	if status.Err() != nil {
		// TODO: Handle error.
	}
}

JobConfig

type JobConfig interface {
	// contains filtered or unexported methods
}

JobConfig contains configuration information for a job. It is implemented by *CopyConfig, *ExtractConfig, *LoadConfig and *QueryConfig.

JobIDConfig

type JobIDConfig struct {
	// JobID is the ID to use for the job. If empty, a random job ID will be generated.
	JobID string

	// If AddJobIDSuffix is true, then a random string will be appended to JobID.
	AddJobIDSuffix bool

	// Location is the location for the job.
	Location string

	// ProjectID is the Google Cloud project associated with the job.
	ProjectID string
}

JobIDConfig describes how to create an ID for a job.
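
For illustration, a sketch of setting an explicit job ID and location on a query. It assumes Query embeds JobIDConfig, in the same way Extractor and Loader in this document do; the query and ID prefix are placeholders.

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("SELECT 17")
	// Assumes Query embeds JobIDConfig, like Extractor and Loader in this document.
	q.JobID = "my-job-prefix"
	q.AddJobIDSuffix = true // append a random suffix to keep the ID unique
	q.Location = "US"
	job, err := q.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(job.ID())
}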

JobIterator

type JobIterator struct {
	ProjectID       string    // Project ID of the jobs to list. Default is the client's project.
	AllUsers        bool      // Whether to list jobs owned by all users in the project, or just the current caller.
	State           State     // List only jobs in the given state. Defaults to all states.
	MinCreationTime time.Time // List only jobs created after this time.
	MaxCreationTime time.Time // List only jobs created before this time.
	ParentJobID     string    // List only jobs that are children of a given scripting job.
	// contains filtered or unexported fields
}

JobIterator iterates over jobs in a project.

func (*JobIterator) Next

func (it *JobIterator) Next() (*Job, error)

Next returns the next Job. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
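
An illustrative sketch of listing recent finished jobs. It assumes the Client.Jobs method (documented elsewhere in this package) returns a *JobIterator; the filter fields are set before the first call to Next.

package main

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	// Assumes Client.Jobs returns a *JobIterator.
	it := client.Jobs(ctx)
	it.State = bigquery.Done
	it.MinCreationTime = time.Now().Add(-24 * time.Hour)
	for {
		job, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		fmt.Println(job.ID(), job.Email())
	}
}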

func (*JobIterator) PageInfo

func (it *JobIterator) PageInfo() *iterator.PageInfo

PageInfo is a getter for the JobIterator's PageInfo.

JobStatistics

type JobStatistics struct {
	CreationTime        time.Time
	StartTime           time.Time
	EndTime             time.Time
	TotalBytesProcessed int64

	Details Statistics

	// NumChildJobs indicates the number of child jobs run as part of a script.
	NumChildJobs int64

	// ParentJobID indicates the origin job for jobs run as part of a script.
	ParentJobID string

	// ScriptStatistics includes information run as part of a child job within
	// a script.
	ScriptStatistics *ScriptStatistics

	// ReservationUsage attributes slot consumption to reservations.
	ReservationUsage []*ReservationUsage

	// TransactionInfo indicates the transaction ID associated with the job, if any.
	TransactionInfo *TransactionInfo

	// SessionInfo contains information about the session if this job is part of one.
	SessionInfo *SessionInfo
}

JobStatistics contains statistics about a job.

JobStatus

type JobStatus struct {
	State State

	// All errors encountered during the running of the job.
	// Not all Errors are fatal, so errors here do not necessarily mean that the job has completed or was unsuccessful.
	Errors []*Error

	// Statistics about the job.
	Statistics *JobStatistics
	// contains filtered or unexported fields
}

JobStatus contains the current State of a job, and errors encountered while processing that job.

func (*JobStatus) Done

func (s *JobStatus) Done() bool

Done reports whether the job has completed. After Done returns true, the Err method will return an error if the job completed unsuccessfully.

func (*JobStatus) Err

func (s *JobStatus) Err() error

Err returns the error that caused the job to complete unsuccessfully (if any).

LoadConfig

type LoadConfig struct {
	// Src is the source from which data will be loaded.
	Src LoadSource

	// Dst is the table into which the data will be loaded.
	Dst *Table

	// CreateDisposition specifies the circumstances under which the destination table will be created.
	// The default is CreateIfNeeded.
	CreateDisposition TableCreateDisposition

	// WriteDisposition specifies how existing data in the destination table is treated.
	// The default is WriteAppend.
	WriteDisposition TableWriteDisposition

	// The labels associated with this job.
	Labels map[string]string

	// If non-nil, the destination table is partitioned by time.
	TimePartitioning *TimePartitioning

	// If non-nil, the destination table is partitioned by integer range.
	RangePartitioning *RangePartitioning

	// Clustering specifies the data clustering configuration for the destination table.
	Clustering *Clustering

	// Custom encryption configuration (e.g., Cloud KMS keys).
	DestinationEncryptionConfig *EncryptionConfig

	// Allows the schema of the destination table to be updated as a side effect of
	// the load job.
	SchemaUpdateOptions []string

	// For Avro-based loads, controls whether logical type annotations are used.
	// See https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#logical_types
	// for additional information.
	UseAvroLogicalTypes bool

	// For ingestion from datastore backups, ProjectionFields governs which fields
	// are projected from the backup.  The default behavior projects all fields.
	ProjectionFields []string

	// HivePartitioningOptions allows use of Hive partitioning based on the
	// layout of objects in Cloud Storage.
	HivePartitioningOptions *HivePartitioningOptions

	// DecimalTargetTypes allows selection of how decimal values are converted when
	// processed in bigquery, subject to the value type having sufficient precision/scale
	// to support the values.  In the order of NUMERIC, BIGNUMERIC, and STRING, a type is
	// selected if it is present in the list and if it supports the necessary precision and scale.
	//
	// StringTargetType supports all precision and scale values.
	DecimalTargetTypes []DecimalTargetType

	// Sets a best-effort deadline on a specific job.  If job execution exceeds this
	// timeout, BigQuery may attempt to cancel this work automatically.
	//
	// This deadline cannot be adjusted or removed once the job is created.  Consider
	// using Job.Cancel in situations where you need more dynamic behavior.
	//
	// Experimental: this option is experimental and may be modified or removed in future versions,
	// regardless of any other documented package stability guarantees.
	JobTimeout time.Duration

	// When loading a table with external data, the user can provide a reference file with the table schema.
	// This is enabled for the following formats: AVRO, PARQUET, ORC.
	ReferenceFileSchemaURI string

	// If true, creates a new session, where the session ID will
	// be a server-generated random ID. If false, runs the load job with an
	// existing session_id passed in ConnectionProperty, otherwise runs the
	// load job in non-session mode.
	CreateSession bool

	// ConnectionProperties are optional key-values settings.
	ConnectionProperties []*ConnectionProperty

	// MediaOptions stores options for customizing media upload.
	MediaOptions []googleapi.MediaOption

	// Controls the behavior of column naming during a load job.
	// For more information, see:
	// https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#columnnamecharactermap
	ColumnNameCharacterMap ColumnNameCharacterMap
}

LoadConfig holds the configuration for a load job.

LoadSource

type LoadSource interface {
	// contains filtered or unexported methods
}

A LoadSource represents a source of data that can be loaded into a BigQuery table.

This package defines two LoadSources: GCSReference, for Google Cloud Storage objects, and ReaderSource, for data read from an io.Reader.

LoadStatistics

type LoadStatistics struct {
	// The number of bytes of source data in a load job.
	InputFileBytes int64

	// The number of source files in a load job.
	InputFiles int64

	// Size of the loaded data in bytes. Note that while a load job is in the
	// running state, this value may change.
	OutputBytes int64

	// The number of rows imported in a load job. Note that while an import job is
	// in the running state, this value may change.
	OutputRows int64
}

LoadStatistics contains statistics about a load job.

Loader

type Loader struct {
	JobIDConfig
	LoadConfig
	// contains filtered or unexported fields
}

A Loader loads data from Google Cloud Storage into a BigQuery table.

func (*Loader) Run

func (l *Loader) Run(ctx context.Context) (j *Job, err error)

Run initiates a load job.
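
A minimal illustrative sketch of a load job from Cloud Storage. It assumes the Table.LoaderFrom method (documented with Table elsewhere in this package); SkipLeadingRows comes from the CSVOptions embedded in FileConfig, and the bucket, dataset and table names are placeholders.

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	gcsRef := bigquery.NewGCSReference("gs://my-bucket/data.csv")
	gcsRef.SourceFormat = bigquery.CSV
	gcsRef.SkipLeadingRows = 1 // via the embedded CSVOptions
	gcsRef.AutoDetect = true
	// Assumes Table.LoaderFrom, which builds a Loader for the table.
	loader := client.Dataset("my_dataset").Table("my_table").LoaderFrom(gcsRef)
	loader.WriteDisposition = bigquery.WriteTruncate
	job, err := loader.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	status, err := job.Wait(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	if status.Err() != nil {
		// TODO: Handle error.
	}
}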

MaterializedViewDefinition

type MaterializedViewDefinition struct {
	// EnableRefresh governs whether the derived view is updated to reflect
	// changes in the base table.
	EnableRefresh bool

	// LastRefreshTime reports the time, in millisecond precision, that the
	// materialized view was last updated.
	LastRefreshTime time.Time

	// Query contains the SQL query used to define the materialized view.
	Query string

	// RefreshInterval defines the maximum frequency, in millisecond precision,
	// at which this materialized view will be refreshed.
	RefreshInterval time.Duration

	// AllowNonIncrementalDefinition for materialized view definition.
	// The default value is false.
	AllowNonIncrementalDefinition bool

	// MaxStaleness of data that could be returned when materialized
	// view is queried.
	//
	// Deprecated: use Table level MaxStaleness.
	MaxStaleness *IntervalValue
}

MaterializedViewDefinition contains information for materialized views.
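
As an illustrative sketch, the following creates a materialized view through table metadata. It assumes TableMetadata's MaterializedView field (TableMetadata is documented elsewhere in this package); the project, dataset and query are placeholders.

package main

import (
	"context"
	"time"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	mv := &bigquery.MaterializedViewDefinition{
		Query:           "SELECT name, SUM(num) AS total FROM `my-project.my_dataset.base_table` GROUP BY name",
		EnableRefresh:   true,
		RefreshInterval: 30 * time.Minute,
	}
	t := client.Dataset("my_dataset").Table("my_materialized_view")
	// Assumes TableMetadata.MaterializedView carries the definition.
	if err := t.Create(ctx, &bigquery.TableMetadata{MaterializedView: mv}); err != nil {
		// TODO: Handle error.
	}
}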

MetadataCacheMode

type MetadataCacheMode string

MetadataCacheMode describes the types of metadata cache mode for external data.

Automatic, Manual

const (
	// Automatic metadata cache mode triggers automatic background refresh of
	// metadata cache from the external source. Queries will use the latest
	// available cache version within the table's maxStaleness interval.
	Automatic MetadataCacheMode = "AUTOMATIC"
	// Manual metadata cache mode triggers manual refresh of the
	// metadata cache from external source. Queries will use the latest manually
	// triggered cache version within the table's maxStaleness interval.
	Manual MetadataCacheMode = "MANUAL"
)

Constants describing types of metadata cache mode for external data.

Model

type Model struct {
	ProjectID string
	DatasetID string
	// ModelID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_).
	// The maximum length is 1,024 characters.
	ModelID string
	// contains filtered or unexported fields
}

Model represents a reference to a BigQuery ML model. Within the API, models are used largely for communicating statistical information about a given model, as creation of models is only supported via BigQuery queries (e.g. CREATE MODEL .. AS ..).

For more information, see the documentation for BigQuery ML: https://cloud.google.com/bigquery/docs/bigqueryml

func (*Model) Delete

func (m *Model) Delete(ctx context.Context) (err error)

Delete deletes an ML model.

func (*Model) ExtractorTo

func (m *Model) ExtractorTo(dst *GCSReference) *Extractor

ExtractorTo returns an Extractor which can persist a BigQuery Model into Google Cloud Storage. The returned Extractor may be further configured before its Run method is called.

func (*Model) FullyQualifiedName

func (m *Model) FullyQualifiedName() string

FullyQualifiedName returns the ID of the model in projectID:datasetID.modelID format.

func (*Model) Identifier

func (m *Model) Identifier(f IdentifierFormat) (string, error)

Identifier returns the ID of the model in the requested format.

For Standard SQL format, the identifier will be quoted if the ProjectID contains dash (-) characters.

func (*Model) Metadata

func (m *Model) Metadata(ctx context.Context) (mm *ModelMetadata, err error)

Metadata fetches the metadata for a model, which includes ML training statistics.

func (*Model) Update

func (m *Model) Update(ctx context.Context, mm ModelMetadataToUpdate, etag string) (md *ModelMetadata, err error)

Update updates mutable fields in an ML model.

ModelIterator

type ModelIterator struct {
	// contains filtered or unexported fields
}

A ModelIterator is an iterator over Models.

func (*ModelIterator) Next

func (it *ModelIterator) Next() (*Model, error)

Next returns the next result. Its second return value is Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

func (*ModelIterator) PageInfo

func (it *ModelIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

ModelMetadata

type ModelMetadata struct {
	// The user-friendly description of the model.
	Description string

	// The user-friendly name of the model.
	Name string

	// The type of the model.  Possible values include:
	// "LINEAR_REGRESSION" - a linear regression model
	// "LOGISTIC_REGRESSION" - a logistic regression model
	// "KMEANS" - a k-means clustering model
	Type string

	// The creation time of the model.
	CreationTime time.Time

	// The last modified time of the model.
	LastModifiedTime time.Time

	// The expiration time of the model.
	ExpirationTime time.Time

	// The geographic location where the model resides.  This value is
	// inherited from the encapsulating dataset.
	Location string

	// Custom encryption configuration (e.g., Cloud KMS keys).
	EncryptionConfig *EncryptionConfig

	Labels map[string]string

	// ETag is the ETag obtained when reading metadata. Pass it to Model.Update
	// to ensure that the metadata hasn't changed since it was read.
	ETag string
	// contains filtered or unexported fields
}

ModelMetadata represents information about a BigQuery ML model.

func (*ModelMetadata) RawFeatureColumns

func (mm *ModelMetadata) RawFeatureColumns() ([]*StandardSQLField, error)

RawFeatureColumns exposes the underlying feature columns used to train an ML model and uses types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

func (*ModelMetadata) RawLabelColumns

func (mm *ModelMetadata) RawLabelColumns() ([]*StandardSQLField, error)

RawLabelColumns exposes the underlying label columns used to train an ML model and uses types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

func (*ModelMetadata) RawTrainingRuns

func (mm *ModelMetadata) RawTrainingRuns() []*TrainingRun

RawTrainingRuns exposes the underlying training run stats for a model using types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

ModelMetadataToUpdate

type ModelMetadataToUpdate struct {
	// The user-friendly description of this model.
	Description optional.String

	// The user-friendly name of this model.
	Name optional.String

	// The time when this model expires.  To remove a model's expiration,
	// set ExpirationTime to NeverExpire.  The zero value is ignored.
	ExpirationTime time.Time

	// The model's encryption configuration.
	EncryptionConfig *EncryptionConfig
	// contains filtered or unexported fields
}

ModelMetadataToUpdate is used when updating an ML model's metadata. Only non-nil fields will be updated.

func (*ModelMetadataToUpdate) DeleteLabel

func (u *ModelMetadataToUpdate) DeleteLabel(name string)

DeleteLabel causes a label to be deleted on a call to Update.

func (*ModelMetadataToUpdate) SetLabel

func (u *ModelMetadataToUpdate) SetLabel(name, value string)

SetLabel causes a label to be added or modified on a call to Update.
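
For illustration, a sketch of updating an ML model's metadata with an ETag precondition. It assumes the Dataset.Model method (documented elsewhere in this package) for obtaining a Model reference; the dataset, model and label values are placeholders.

package main

import (
	"context"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	// Assumes Dataset.Model returns a *Model reference.
	model := client.Dataset("my_dataset").Model("my_model")
	md, err := model.Metadata(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	update := bigquery.ModelMetadataToUpdate{
		Description: "logistic regression model, retrained weekly",
	}
	update.SetLabel("team", "analytics")
	// Passing the ETag guards against concurrent metadata changes.
	if _, err := model.Update(ctx, update, md.ETag); err != nil {
		// TODO: Handle error.
	}
}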

MultiError

type MultiError []error

A MultiError contains multiple related errors.

func (MultiError) Error

func (m MultiError) Error() string

NullBool

type NullBool struct {
	Bool  bool
	Valid bool // Valid is true if Bool is not NULL.
}

NullBool represents a BigQuery BOOL that may be NULL.

func (NullBool) MarshalJSON

func (n NullBool) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullBool to JSON.

func (NullBool) String

func (n NullBool) String() string

func (*NullBool) UnmarshalJSON

func (n *NullBool) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullBool.

NullDate

type NullDate struct {
	Date  civil.Date
	Valid bool // Valid is true if Date is not NULL.
}

NullDate represents a BigQuery DATE that may be null.

func (NullDate) MarshalJSON

func (n NullDate) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullDate to JSON.

func (NullDate) String

func (n NullDate) String() string

func (*NullDate) UnmarshalJSON

func (n *NullDate) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullDate.

NullDateTime

type NullDateTime struct {
	DateTime civil.DateTime
	Valid    bool // Valid is true if DateTime is not NULL.
}

NullDateTime represents a BigQuery DATETIME that may be null.

func (NullDateTime) MarshalJSON

func (n NullDateTime) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullDateTime to JSON.

func (NullDateTime) String

func (n NullDateTime) String() string

func (*NullDateTime) UnmarshalJSON

func (n *NullDateTime) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullDateTime.

NullFloat64

type NullFloat64 struct {
	Float64 float64
	Valid   bool // Valid is true if Float64 is not NULL.
}

NullFloat64 represents a BigQuery FLOAT64 that may be NULL.

func (NullFloat64) MarshalJSON

func (n NullFloat64) MarshalJSON() (b []byte, err error)

MarshalJSON converts the NullFloat64 to JSON.

func (NullFloat64) String

func (n NullFloat64) String() string

func (*NullFloat64) UnmarshalJSON

func (n *NullFloat64) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullFloat64.

NullGeography

type NullGeography struct {
	GeographyVal string
	Valid        bool // Valid is true if GeographyVal is not NULL.
}

NullGeography represents a BigQuery GEOGRAPHY string that may be NULL.

func (NullGeography) MarshalJSON

func (n NullGeography) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullGeography to JSON.

func (NullGeography) String

func (n NullGeography) String() string

func (*NullGeography) UnmarshalJSON

func (n *NullGeography) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullGeography.

NullInt64

type NullInt64 struct {
	Int64 int64
	Valid bool // Valid is true if Int64 is not NULL.
}

NullInt64 represents a BigQuery INT64 that may be NULL.

func (NullInt64) MarshalJSON

func (n NullInt64) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullInt64 to JSON.

func (NullInt64) String

func (n NullInt64) String() string

func (*NullInt64) UnmarshalJSON

func (n *NullInt64) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullInt64.

NullJSON

type NullJSON struct {
	JSONVal string
	Valid   bool // Valid is true if JSONVal is not NULL.
}

NullJSON represents a BigQuery JSON string that may be NULL.

func (NullJSON) MarshalJSON

func (n NullJSON) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullJSON to JSON.

func (NullJSON) String

func (n NullJSON) String() string

func (*NullJSON) UnmarshalJSON

func (n *NullJSON) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullJSON.

NullString

type NullString struct {
	StringVal string
	Valid     bool // Valid is true if StringVal is not NULL.
}

NullString represents a BigQuery STRING that may be NULL.
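
As an illustrative sketch, the following reads nullable columns into the Null wrapper types so that NULL values can be distinguished from zero values. It assumes RowIterator.Next can populate struct fields of these types for nullable columns; the query and column names are placeholders.

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "project-id")
	if err != nil {
		// TODO: Handle error.
	}
	q := client.Query("SELECT name, age FROM `my-project.my_dataset.people`")
	it, err := q.Read(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	type person struct {
		Name bigquery.NullString
		Age  bigquery.NullInt64
	}
	for {
		var p person
		err := it.Next(&p)
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		if p.Name.Valid {
			fmt.Println(p.Name.StringVal, p.Age)
		}
	}
}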

func (NullString) MarshalJSON

func (n NullString) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullString to JSON.

func (NullString) String

func (n NullString) String() string

func (*NullString) UnmarshalJSON

func (n *NullString) UnmarshalJSON(b []byte) error

UnmarshalJSON converts JSON into a NullString.

NullTime

type NullTime struct {
	Time  civil.Time
	Valid