Cloud Spanner - Package cloud.google.com/go/spanner (v1.36.0)

Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances.

See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API.

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a client that refers to the database of interest:

ctx := context.Background()
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    // TODO: Handle error.
}
defer client.Close()

Remember to close the client after use to free up the sessions in the session pool.

To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual:

// Set SPANNER_EMULATOR_HOST environment variable.
err := os.Setenv("SPANNER_EMULATOR_HOST", "localhost:9010")
if err != nil {
    // TODO: Handle error.
}
// Create client as usual.
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
    // TODO: Handle error.
}

Simple Reads and Writes

Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back:

_, err := client.Apply(ctx, []*spanner.Mutation{
    spanner.Insert("Users",
        []string{"name", "email"},
        []interface{}{"alice", "[email protected]"})})
if err != nil {
    // TODO: Handle error.
}
row, err := client.Single().ReadRow(ctx, "Users",
    spanner.Key{"alice"}, []string{"email"})
if err != nil {
    // TODO: Handle error.
}

All the methods used above are discussed in more detail below.

Keys

Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key:

key1 := spanner.Key{"alice"}

KeyRanges

The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type:

kr1 := spanner.KeyRange{Start: key1, End: key2}

By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions:

// include both keys
kr2 := spanner.KeyRange{Start: key1, End: key2, Kind: spanner.ClosedClosed}

KeySets

A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets:

ks1 := spanner.KeySets(key1, key2, kr1, kr2)

AllKeys returns a KeySet that refers to all the keys in a table:

ks2 := spanner.AllKeys()

Transactions

All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state.

Single Reads

The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method.

When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read:

row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})

Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns:

iter := client.Single().Read(ctx, "Accounts", keyset1, columns)

Read returns a RowIterator. You can call the Do method on the iterator and pass a callback:

err := iter.Do(func(row *spanner.Row) error {
   // TODO: use row
   return nil
})

RowIterator also follows the standard pattern for the Google Cloud Client Libraries:

defer iter.Stop()
for {
    row, err := iter.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: use row
}

Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.)

To read rows with an index, use ReadUsingIndex.
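
For example, assuming a secondary index named "AccountsByEmail" on the Accounts table (both names are hypothetical), an index read looks like an ordinary read with the index name added:

iter := client.Single().ReadUsingIndex(ctx, "Accounts", "AccountsByEmail", keyset1, columns)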

Statements

The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map:

stmt := spanner.NewStatement("SELECT First, Last FROM SINGERS WHERE Last >= @start")
stmt.Params["start"] = "Dylan"

You can also construct a Statement directly with a struct literal, providing your own map of parameters.
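
For example, the statement above can equivalently be written as a struct literal:

stmt := spanner.Statement{
    SQL:    "SELECT First, Last FROM SINGERS WHERE Last >= @start",
    Params: map[string]interface{}{"start": "Dylan"},
}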

Use the Query method to run the statement and obtain an iterator:

iter := client.Single().Query(ctx, stmt)

Rows

Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value.

You can extract by column position or name:

err := row.Column(0, &name)
err = row.ColumnByName("balance", &balance)

You can extract all the columns at once:

err = row.Columns(&name, &balance)

Or you can define a Go struct that corresponds to your columns, and extract into that:

var s struct { Name string; Balance int64 }
err = row.ToStruct(&s)

For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString:

var ns spanner.NullString
if err := row.Column(0, &ns); err != nil {
    // TODO: Handle error.
}
if ns.Valid {
    fmt.Println(ns.StringVal)
} else {
    fmt.Println("column is NULL")
}

Multiple Reads

To perform more than one read in a transaction, use ReadOnlyTransaction:

txn := client.ReadOnlyTransaction()
defer txn.Close()
iter := txn.Query(ctx, stmt1)
// ...
iter = txn.Query(ctx, stmt2)
// ...

You must call Close when you are done with the transaction.

Timestamps and Timestamp Bounds

Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp.

By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use

client.Single().
    WithTimestampBound(spanner.MaxStaleness(1*time.Minute)).
    Query(ctx, stmt)

See the documentation of TimestampBound for more details.

Mutations

To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties.

One takes lists of columns and values along with the table name:

m1 := spanner.Insert("Users",
    []string{"name", "email"},
    []interface{}{"alice", "[email protected]"})

One takes a map from column names to values:

m2 := spanner.InsertMap("Users", map[string]interface{}{
    "name":  "alice",
    "email": "[email protected]",
})

And the third accepts a struct value, and determines the columns from the struct field names:

type User struct { Name, Email string }
u := User{Name: "alice", Email: "[email protected]"}
m3, err := spanner.InsertStruct("Users", u)

Writes

To apply a list of mutations to the database, use Apply:

_, err := client.Apply(ctx, []*spanner.Mutation{m1, m2, m3})

If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction:

_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
    var balance int64
    row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
    if err != nil {
        // The transaction function will be called again if the error code
        // of this error is Aborted. The backend may automatically abort
        // any read/write transaction if it detects a deadlock or other
        // problems.
        return err
    }
    if err := row.Column(0, &balance); err != nil {
        return err
    }

    if balance <= 10 {
        return errors.New("insufficient funds in account")
    }
    balance -= 10
    m := spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", balance})
    // The buffered mutation will be committed.  If the commit
    // fails with an Aborted error, this function will be called
    // again.
    return txn.BufferWrite([]*spanner.Mutation{m})
})

Structs

Cloud Spanner STRUCT values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value.

A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType; otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the spanner:"" tag annotation against the corresponding field in the Go struct's type definition.

A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored.

NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values.
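
As a minimal sketch (the type and field names below are hypothetical), a Go struct with spanner field tags can be passed as a query parameter; a nil pointer or nil slice of the same Go type specifies the corresponding typed NULL value:

type Name struct {
    FirstName string `spanner:"first_name"`
    LastName  string `spanner:"last_name"`
}

// A []Name value encodes as ARRAY<STRUCT<first_name STRING, last_name STRING>>.
// A nil *Name or nil []Name encodes as a typed NULL STRUCT or NULL array.
stmt := spanner.Statement{
    SQL:    "SELECT * FROM UNNEST(@names)",
    Params: map[string]interface{}{"names": []Name{{FirstName: "Alice", LastName: "Trentor"}}},
}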

DML and Partitioned DML

Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.)

For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned.
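
For example, a sketch reusing the Accounts table from the examples above:

// DML inside a read-write transaction.
_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
    count, err := txn.Update(ctx, spanner.Statement{
        SQL: "UPDATE Accounts SET balance = balance - 10 WHERE user = 'alice'",
    })
    if err != nil {
        return err
    }
    _ = count // count holds the number of rows affected.
    return nil
})

// Partitioned DML runs outside a read-write transaction.
count, err := client.PartitionedUpdate(ctx, spanner.Statement{
    SQL: "DELETE FROM Accounts WHERE balance = 0",
})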

Tracing

This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.

Constants

Scope, AdminScope

const (
	// Scope is the scope for Cloud Spanner Data API.
	Scope = "https://www.googleapis.com/auth/spanner.data"

	// AdminScope is the scope for Cloud Spanner Admin APIs.
	AdminScope = "https://www.googleapis.com/auth/spanner.admin"
)

NumericPrecisionDigits, NumericScaleDigits

const (

	// NumericPrecisionDigits is the maximum number of digits in a NUMERIC
	// value.
	NumericPrecisionDigits = 38

	// NumericScaleDigits is the maximum number of digits after the decimal
	// point in a NUMERIC value.
	NumericScaleDigits = 9
)

Variables

OpenSessionCount, OpenSessionCountView, MaxAllowedSessionsCount, MaxAllowedSessionsCountView, SessionsCount, SessionsCountView, MaxInUseSessionsCount, MaxInUseSessionsCountView, GetSessionTimeoutsCount, GetSessionTimeoutsCountView, AcquiredSessionsCount, AcquiredSessionsCountView, ReleasedSessionsCount, ReleasedSessionsCountView, GFELatency, GFELatencyView, GFEHeaderMissingCount, GFEHeaderMissingCountView

var (
	OpenSessionCount = stats.Int64(
		statsPrefix+"open_session_count",
		"Number of sessions currently opened",
		stats.UnitDimensionless,
	)

	OpenSessionCountView = &view.View{
		Measure:     OpenSessionCount,
		Aggregation: view.LastValue(),
		TagKeys:     tagCommonKeys,
	}

	MaxAllowedSessionsCount = stats.Int64(
		statsPrefix+"max_allowed_sessions",
		"The maximum number of sessions allowed. Configurable by the user.",
		stats.UnitDimensionless,
	)

	MaxAllowedSessionsCountView = &view.View{
		Measure:     MaxAllowedSessionsCount,
		Aggregation: view.LastValue(),
		TagKeys:     tagCommonKeys,
	}

	SessionsCount = stats.Int64(
		statsPrefix+"num_sessions_in_pool",
		"The number of sessions currently in use.",
		stats.UnitDimensionless,
	)

	SessionsCountView = &view.View{
		Measure:     SessionsCount,
		Aggregation: view.LastValue(),
		TagKeys:     append(tagCommonKeys, tagKeyType),
	}

	MaxInUseSessionsCount = stats.Int64(
		statsPrefix+"max_in_use_sessions",
		"The maximum number of sessions in use during the last 10 minute interval.",
		stats.UnitDimensionless,
	)

	MaxInUseSessionsCountView = &view.View{
		Measure:     MaxInUseSessionsCount,
		Aggregation: view.LastValue(),
		TagKeys:     tagCommonKeys,
	}

	GetSessionTimeoutsCount = stats.Int64(
		statsPrefix+"get_session_timeouts",
		"The number of get sessions timeouts due to pool exhaustion.",
		stats.UnitDimensionless,
	)

	GetSessionTimeoutsCountView = &view.View{
		Measure:     GetSessionTimeoutsCount,
		Aggregation: view.Count(),
		TagKeys:     tagCommonKeys,
	}

	AcquiredSessionsCount = stats.Int64(
		statsPrefix+"num_acquired_sessions",
		"The number of sessions acquired from the session pool.",
		stats.UnitDimensionless,
	)

	AcquiredSessionsCountView = &view.View{
		Measure:     AcquiredSessionsCount,
		Aggregation: view.Count(),
		TagKeys:     tagCommonKeys,
	}

	ReleasedSessionsCount = stats.Int64(
		statsPrefix+"num_released_sessions",
		"The number of sessions released by the user and pool maintainer.",
		stats.UnitDimensionless,
	)

	ReleasedSessionsCountView = &view.View{
		Measure:     ReleasedSessionsCount,
		Aggregation: view.Count(),
		TagKeys:     tagCommonKeys,
	}

	GFELatency = stats.Int64(
		statsPrefix+"gfe_latency",
		"Latency between Google's network receiving an RPC and reading back the first byte of the response",
		stats.UnitMilliseconds,
	)

	GFELatencyView = &view.View{
		Name:        "cloud.google.com/go/spanner/gfe_latency",
		Measure:     GFELatency,
		Description: "Latency between Google's network receives an RPC and reads back the first byte of the response",
		Aggregation: view.Distribution(0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
			16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
			300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
			100000.0),
		TagKeys: append(tagCommonKeys, tagKeyMethod),
	}

	GFEHeaderMissingCount = stats.Int64(
		statsPrefix+"gfe_header_missing_count",
		"" /* 128 byte string literal not displayed */,
		stats.UnitDimensionless,
	)

	GFEHeaderMissingCountView = &view.View{
		Name:        "cloud.google.com/go/spanner/gfe_header_missing_count",
		Measure:     GFEHeaderMissingCount,
		Description: "" /* 128 byte string literal not displayed */,
		Aggregation: view.Count(),
		TagKeys:     append(tagCommonKeys, tagKeyMethod),
	}
)

CommitTimestamp

var (
	// CommitTimestamp is a special value used to tell Cloud Spanner to insert
	// the commit timestamp of the transaction into a column. It can be used in
	// a Mutation, or directly used in InsertStruct or InsertMap. See
	// ExampleCommitTimestamp. This is just a placeholder and the actual value
	// stored in this variable has no meaning.
	CommitTimestamp = commitTimestamp
)

DefaultRetryBackoff

var DefaultRetryBackoff = gax.Backoff{
	Initial:    20 * time.Millisecond,
	Max:        32 * time.Second,
	Multiplier: 1.3,
}

DefaultRetryBackoff is used for retryers as a fallback value when the server did not return any retry information.

DefaultSessionPoolConfig

var DefaultSessionPoolConfig = SessionPoolConfig{
	MinOpened: 100,
	MaxOpened: numChannels * 100,
	MaxBurst:  10,

	WriteSessions:       0.2,
	HealthCheckWorkers:  10,
	HealthCheckInterval: healthCheckIntervalMins * time.Minute,
	// contains filtered or unexported fields
}

DefaultSessionPoolConfig is the default configuration for the session pool that will be used for a Spanner client, unless the user supplies a specific session pool config.

Functions

func DisableGfeLatencyAndHeaderMissingCountViews

func DisableGfeLatencyAndHeaderMissingCountViews()

DisableGfeLatencyAndHeaderMissingCountViews disables the GFEHeaderMissingCount and GFELatency metrics.

func EnableGfeHeaderMissingCountView

func EnableGfeHeaderMissingCountView() error

EnableGfeHeaderMissingCountView enables the GFEHeaderMissingCount metric.

func EnableGfeLatencyAndHeaderMissingCountViews

func EnableGfeLatencyAndHeaderMissingCountViews() error

EnableGfeLatencyAndHeaderMissingCountViews enables the GFEHeaderMissingCount and GFELatency metrics.

func EnableGfeLatencyView

func EnableGfeLatencyView() error

EnableGfeLatencyView enables the GFELatency metric.

func EnableStatViews

func EnableStatViews() error

EnableStatViews enables all views of metrics related to session management.
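
For example, to enable the session pool views before creating a client:

if err := spanner.EnableStatViews(); err != nil {
    // TODO: Handle error.
}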

func ErrCode

func ErrCode(err error) codes.Code

ErrCode extracts the canonical error code from a Go error.
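
For example (a sketch; codes is google.golang.org/grpc/codes), to check whether a ReadRow call failed because the row does not exist:

row, err := client.Single().ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"email"})
if spanner.ErrCode(err) == codes.NotFound {
    // The row does not exist.
}
_ = row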

func ErrDesc

func ErrDesc(err error) string

ErrDesc extracts the Cloud Spanner error description from a Go error.

func ExtractRetryDelay

func ExtractRetryDelay(err error) (time.Duration, bool)

ExtractRetryDelay extracts retry backoff from a *spanner.Error if present.

func NumericString

func NumericString(r *big.Rat) string

NumericString returns a string representing a *big.Rat in a format compatible with Spanner SQL. It returns a floating-point literal with 9 digits after the decimal point.
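
For example:

r := big.NewRat(12345, 1000)          // 12.345
fmt.Println(spanner.NumericString(r)) // "12.345000000"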

func ToSpannerError

func ToSpannerError(err error) error

ToSpannerError converts a general Go error to *spanner.Error. If the given error is already a *spanner.Error, the original error will be returned.

Spanner errors are normally created by the Spanner client library from the returned APIError of an RPC. This method can also be used to create Spanner errors for use in tests. The recommended way to create test errors is calling this method with a status error, e.g. ToSpannerError(status.New(codes.NotFound, "Table not found").Err()).
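
For example (status is google.golang.org/grpc/status, codes is google.golang.org/grpc/codes):

testErr := spanner.ToSpannerError(status.New(codes.NotFound, "Table not found").Err())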

ApplyOption

type ApplyOption func(*applyOption)

An ApplyOption is an optional argument to Apply.

func ApplyAtLeastOnce

func ApplyAtLeastOnce() ApplyOption

ApplyAtLeastOnce returns an ApplyOption that removes replay protection.

With this option, Apply may attempt to apply mutations more than once; if the mutations are not idempotent, this may lead to a failure being reported when the mutation was applied more than once. For example, an insert may fail with ALREADY_EXISTS even though the row did not exist before Apply was called. For this reason, most users of the library will prefer not to use this option. However, ApplyAtLeastOnce requires only a single RPC, whereas Apply's default replay protection may require an additional RPC. So this option may be appropriate for latency sensitive and/or high throughput blind writing.
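
For example, with ms being a []*spanner.Mutation of idempotent mutations:

_, err := client.Apply(ctx, ms, spanner.ApplyAtLeastOnce())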

func Priority

func Priority(priority sppb.RequestOptions_Priority) ApplyOption

Priority returns an ApplyOption that sets the RPC priority to use for the commit operation.

func TransactionTag

func TransactionTag(tag string) ApplyOption

TransactionTag returns an ApplyOption that will include the given tag as a transaction tag for a write-only transaction.
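
For example, a sketch combining both options (sppb is google.golang.org/genproto/googleapis/spanner/v1; the tag value is arbitrary):

_, err := client.Apply(ctx, ms,
    spanner.Priority(sppb.RequestOptions_PRIORITY_LOW),
    spanner.TransactionTag("app=concert,env=dev"))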

BatchReadOnlyTransaction

type BatchReadOnlyTransaction struct {
	ReadOnlyTransaction
	ID BatchReadOnlyTransactionID
}

BatchReadOnlyTransaction is a ReadOnlyTransaction that allows for exporting arbitrarily large amounts of data from Cloud Spanner databases. BatchReadOnlyTransaction partitions a read/query request. The read/query request can then be executed independently over each partition while observing the same snapshot of the database. BatchReadOnlyTransaction can also be shared across multiple clients by passing around the BatchReadOnlyTransactionID and then recreating the transaction using Client.BatchReadOnlyTransactionFromID.

Note: if a client is used only to run partitions, you can create it using a ClientConfig with both MinOpened and MaxIdle set to zero to avoid creating unnecessary sessions. You can also avoid excess gRPC channels by setting ClientConfig.NumChannels to the number of concurrently active BatchReadOnlyTransactions you expect to have.
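
For example, a sketch of such a partition-only client:

client, err := spanner.NewClientWithConfig(ctx, "projects/P/instances/I/databases/D",
    spanner.ClientConfig{
        SessionPoolConfig: spanner.SessionPoolConfig{MinOpened: 0, MaxIdle: 0},
    })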

func (*BatchReadOnlyTransaction) AnalyzeQuery

func (t *BatchReadOnlyTransaction) AnalyzeQuery(ctx context.Context, statement Statement) (*sppb.QueryPlan, error)

AnalyzeQuery returns the query plan for statement.

func (*BatchReadOnlyTransaction) Cleanup

func (t *BatchReadOnlyTransaction) Cleanup(ctx context.Context)

Cleanup cleans up all the resources used by this transaction and makes it unusable. Once this method is invoked, the transaction is no longer usable anywhere, including other clients/processes with which this transaction was shared.

Calling Cleanup is optional, but recommended. If Cleanup is not called, the transaction's resources will be freed when the session expires on the backend and is deleted. For more information about recycled sessions, see https://cloud.google.com/spanner/docs/sessions.

func (*BatchReadOnlyTransaction) Close

func (t *BatchReadOnlyTransaction) Close()

Close marks the txn as closed.

func (*BatchReadOnlyTransaction) Execute

func (t *BatchReadOnlyTransaction) Execute(ctx context.Context, p *Partition) *RowIterator

Execute runs a single Partition obtained from PartitionRead or PartitionQuery.

func (*BatchReadOnlyTransaction) PartitionQuery

func (t *BatchReadOnlyTransaction) PartitionQuery(ctx context.Context, statement Statement, opt PartitionOptions) ([]*Partition, error)

PartitionQuery returns a list of Partitions that can be used to execute a query against the database.

func (*BatchReadOnlyTransaction) PartitionQueryWithOptions

func (t *BatchReadOnlyTransaction) PartitionQueryWithOptions(ctx context.Context, statement Statement, opt PartitionOptions, qOpts QueryOptions) ([]*Partition, error)

PartitionQueryWithOptions returns a list of Partitions that can be used to execute a query against the database. The sql query execution will be optimized based on the given query options.

func (*BatchReadOnlyTransaction) PartitionRead

func (t *BatchReadOnlyTransaction) PartitionRead(ctx context.Context, table string, keys KeySet, columns []string, opt PartitionOptions) ([]*Partition, error)

PartitionRead returns a list of Partitions that can be used to read rows from the database. These partitions can be executed across multiple processes, even across different machines. The partition size and count hints can be configured using PartitionOptions.

func (*BatchReadOnlyTransaction) PartitionReadUsingIndex

func (t *BatchReadOnlyTransaction) PartitionReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string, opt PartitionOptions) ([]*Partition, error)

PartitionReadUsingIndex returns a list of Partitions that can be used to read rows from the database using an index.

func (*BatchReadOnlyTransaction) PartitionReadUsingIndexWithOptions

func (t *BatchReadOnlyTransaction) PartitionReadUsingIndexWithOptions(ctx context.Context, table, index string, keys KeySet, columns []string, opt PartitionOptions, readOptions ReadOptions) ([]*Partition, error)

PartitionReadUsingIndexWithOptions returns a list of Partitions that can be used to read rows from the database using an index. Pass a ReadOptions to modify the read operation.

func (*BatchReadOnlyTransaction) PartitionReadWithOptions

func (t *BatchReadOnlyTransaction) PartitionReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opt PartitionOptions, readOptions ReadOptions) ([]*Partition, error)

PartitionReadWithOptions returns a list of Partitions that can be used to read rows from the database. These partitions can be executed across multiple processes, even across different machines. The partition size and count hints can be configured using PartitionOptions. Pass a ReadOptions to modify the read operation.

func (*BatchReadOnlyTransaction) Query

func (t *BatchReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator

Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.

Query returns only row data, without a query plan or execution statistics. Use QueryWithStats to get rows along with the plan and statistics. Use AnalyzeQuery to get just the plan.

func (*BatchReadOnlyTransaction) QueryWithOptions

func (t *BatchReadOnlyTransaction) QueryWithOptions(ctx context.Context, statement Statement, opts QueryOptions) *RowIterator

QueryWithOptions executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The sql query execution will be optimized based on the given query options.

func (*BatchReadOnlyTransaction) QueryWithStats

func (t *BatchReadOnlyTransaction) QueryWithStats(ctx context.Context, statement Statement) *RowIterator

QueryWithStats executes a SQL statement against the database. It returns a RowIterator for retrieving the resulting rows. The RowIterator will also be populated with a query plan and execution statistics.

func (*BatchReadOnlyTransaction) Read

func (t *BatchReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator

Read returns a RowIterator for reading multiple rows from the database.

func (*BatchReadOnlyTransaction) ReadRow

func (t *BatchReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)

ReadRow reads a single row from the database.

If no row is present with the given key, then ReadRow returns an error where spanner.ErrCode(err) is codes.NotFound.

func (*BatchReadOnlyTransaction) ReadRowUsingIndex

func (t *BatchReadOnlyTransaction) ReadRowUsingIndex(ctx context.Context, table string, index string, key Key, columns []string) (*Row, error)

ReadRowUsingIndex reads a single row from the database using an index.

If no row is present with the given index key, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.NotFound.

If more than one row is received with the given index key, then ReadRowUsingIndex returns an error where spanner.ErrCode(err) is codes.FailedPrecondition.

func (*BatchReadOnlyTransaction) ReadRowWithOptions

func (t *BatchReadOnlyTransaction) ReadRowWithOptions(ctx context.Context, table string, key Key, columns []string, opts *ReadOptions) (*Row, error)

ReadRowWithOptions reads a single row from the database. Pass a ReadOptions to modify the read operation.

If no row is present with the given key, then ReadRowWithOptions returns an error where spanner.ErrCode(err) is codes.NotFound.

func (*BatchReadOnlyTransaction) ReadUsingIndex

func (t *BatchReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) (ri *RowIterator)

ReadUsingIndex calls ReadWithOptions with ReadOptions{Index: index}.

func (*BatchReadOnlyTransaction) ReadWithOptions

func (t *BatchReadOnlyTransaction) ReadWithOptions(ctx context.Context, table string, keys KeySet, columns []string, opts *ReadOptions) (ri *RowIterator)

ReadWithOptions returns a RowIterator for reading multiple rows from the database. Pass a ReadOptions to modify the read operation.

BatchReadOnlyTransactionID

type BatchReadOnlyTransactionID struct {
	// contains filtered or unexported fields
}

BatchReadOnlyTransactionID is a unique identifier for a BatchReadOnlyTransaction. It can be used to re-create a BatchReadOnlyTransaction on a different machine or process by calling Client.BatchReadOnlyTransactionFromID.

func (BatchReadOnlyTransactionID) MarshalBinary

func (tid BatchReadOnlyTransactionID) MarshalBinary() (data []byte, err error)

MarshalBinary implements BinaryMarshaler.

func (*BatchReadOnlyTransactionID) UnmarshalBinary

func (tid *BatchReadOnlyTransactionID) UnmarshalBinary(data []byte) error

UnmarshalBinary implements BinaryUnmarshaler.
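
For example, a sketch of sharing a transaction across processes by serializing its ID (transport of the bytes is up to the application):

data, err := txn.ID.MarshalBinary()
if err != nil {
    // TODO: Handle error.
}
// ... send data to another process, which then reconstructs the transaction:
var tid spanner.BatchReadOnlyTransactionID
if err := tid.UnmarshalBinary(data); err != nil {
    // TODO: Handle error.
}
txn2 := client.BatchReadOnlyTransactionFromID(tid)
_ = txn2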

Client

type Client struct {
	// contains filtered or unexported fields
}

Client is a client for reading and writing data to a Cloud Spanner database. A client is safe to use concurrently, except for its Close method.

func NewClient

func NewClient(ctx context.Context, database string, opts ...option.ClientOption) (*Client, error)

NewClient creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID. It uses a default configuration.

Example

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	const myDB = "projects/my-project/instances/my-instance/databases/my-db"
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_ = client // TODO: Use client.
}

func NewClientWithConfig

func NewClientWithConfig(ctx context.Context, database string, config ClientConfig, opts ...option.ClientOption) (c *Client, err error)

NewClientWithConfig creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID.

Example

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	const myDB = "projects/my-project/instances/my-instance/databases/my-db"
	client, err := spanner.NewClientWithConfig(ctx, myDB, spanner.ClientConfig{
		NumChannels: 10,
	})
	if err != nil {
		// TODO: Handle error.
	}
	_ = client     // TODO: Use client.
	client.Close() // Close client when done.
}

func (*Client) Apply

func (c *Client) Apply(ctx context.Context, ms []*Mutation, opts ...ApplyOption) (commitTimestamp time.Time, err error)

Apply applies a list of mutations atomically to the database.

Example

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	m := spanner.Update("Users", []string{"name", "email"}, []interface{}{"alice", "[email protected]"})
	_, err = client.Apply(ctx, []*spanner.Mutation{m})
	if err != nil {
		// TODO: Handle error.
	}
}

func (*Client) BatchReadOnlyTransaction

func (c *Client) BatchReadOnlyTransaction(ctx context.Context, tb TimestampBound) (*BatchReadOnlyTransaction, error)

BatchReadOnlyTransaction returns a BatchReadOnlyTransaction that can be used for partitioned reads or queries from a snapshot of the database. This is useful in batch processing pipelines where one wants to divide the work of reading from the database across multiple machines.

Note: This transaction does not use the underlying session pool but creates a new session each time, and the session is reused across clients.

You should call Close() when the txn is no longer needed on the local client, and call Cleanup() when the txn is finished for all clients, to free the session.

Example

package main

import (
	"context"
	"sync"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	var (
		client *spanner.Client
		txn    *spanner.BatchReadOnlyTransaction
		err    error
	)
	if client, err = spanner.NewClient(ctx, myDB); err != nil {
		// TODO: Handle error.
	}
	defer client.Close()
	if txn, err = client.BatchReadOnlyTransaction(ctx, spanner.StrongRead()); err != nil {
		// TODO: Handle error.
	}
	defer txn.Close()

	// Singer represents the elements in a row from the Singers table.
	type Singer struct {
		SingerID   int64
		FirstName  string
		LastName   string
		SingerInfo []byte
	}
	stmt := spanner.Statement{SQL: "SELECT * FROM Singers;"}
	partitions, err := txn.PartitionQuery(ctx, stmt, spanner.PartitionOptions{})
	if err != nil {
		// TODO: Handle error.
	}
	// Note: here we use multiple goroutines, but you should use separate
	// processes/machines.
	wg := sync.WaitGroup{}
	for i, p := range partitions {
		wg.Add(1)
		go func(i int, p *spanner.Partition) {
			defer wg.Done()
			iter := txn.Execute(ctx, p)
			defer iter.Stop()
			for {
				row, err := iter.Next()
				if err == iterator.Done {
					break
				} else if err != nil {
					// TODO: Handle error.
				}
				var s Singer
				if err := row.ToStruct(&s); err != nil {
					// TODO: Handle error.
				}
				_ = s // TODO: Process the row.
			}
		}(i, p)
	}
	wg.Wait()
}

func (*Client) BatchReadOnlyTransactionFromID

func (c *Client) BatchReadOnlyTransactionFromID(tid BatchReadOnlyTransactionID) *BatchReadOnlyTransaction

BatchReadOnlyTransactionFromID reconstructs a BatchReadOnlyTransaction from a BatchReadOnlyTransactionID.

func (*Client) Close

func (c *Client) Close()

Close closes the client.

func (*Client) DatabaseName

func (c *Client) DatabaseName() string

DatabaseName returns the full name of a database, e.g., "projects/spanner-cloud-test/instances/foo/databases/foodb".

func (*Client) PartitionedUpdate

func (c *Client) PartitionedUpdate(ctx context.Context, statement Statement) (count int64, err error)

PartitionedUpdate executes a DML statement in parallel across the database, using separate, internal transactions that commit independently. The DML statement must be fully partitionable: it must be expressible as the union of many statements each of which accesses only a single row of the table. The statement should also be idempotent, because it may be applied more than once.

PartitionedUpdate returns an estimated count of the number of rows affected. The actual number of affected rows may be greater than the estimate.

func (*Client) PartitionedUpdateWithOptions

func (c *Client) PartitionedUpdateWithOptions(ctx context.Context, statement Statement, opts QueryOptions) (count int64, err error)

PartitionedUpdateWithOptions executes a DML statement in parallel across the database, using separate, internal transactions that commit independently. The sql query execution will be optimized based on the given query options.

func (*Client) ReadOnlyTransaction

func (c *Client) ReadOnlyTransaction() *ReadOnlyTransaction

ReadOnlyTransaction returns a ReadOnlyTransaction that can be used for multiple reads from the database. You must call Close() when the ReadOnlyTransaction is no longer needed to release resources on the server.

ReadOnlyTransaction will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency or to "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.

Example

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	t := client.ReadOnlyTransaction()
	defer t.Close()
	// TODO: Read with t using Read, ReadRow, ReadUsingIndex, or Query.
}

func (*Client) ReadWriteTransaction

func (c *Client) ReadWriteTransaction(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error) (commitTimestamp time.Time, err error)

ReadWriteTransaction executes a read-write transaction, with retries as necessary.

The function f will be called one or more times. It must not maintain any state between calls.

If the transaction cannot be committed or if f returns an ABORTED error, ReadWriteTransaction will call f again. It will continue to call f until the transaction can be committed or the Context times out or is cancelled. If f returns an error other than ABORTED, ReadWriteTransaction will abort the transaction and return the error.

To limit the number of retries, set a deadline on the Context rather than using a fixed limit on the number of attempts. ReadWriteTransaction will retry as needed until that deadline is met.

See https://godoc.org/cloud.google.com/go/spanner#ReadWriteTransaction for more details.

Example

package main

import (
	"context"
	"errors"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		var balance int64
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			// This function will be called again if this is an IsAborted error.
			return err
		}
		if err := row.Column(0, &balance); err != nil {
			return err
		}

		if balance <= 10 {
			return errors.New("insufficient funds in account")
		}
		balance -= 10
		m := spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", balance})
		// The buffered mutation will be committed. If the commit fails with an
		// IsAborted error, this function will be called again.
		return txn.BufferWrite([]*spanner.Mutation{m})
	})
	if err != nil {
		// TODO: Handle error.
	}
}

func (*Client) ReadWriteTransactionWithOptions

func (c *Client) ReadWriteTransactionWithOptions(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error, options TransactionOptions) (resp CommitResponse, err error)

ReadWriteTransactionWithOptions executes a read-write transaction with configurable options, with retries as necessary.

ReadWriteTransactionWithOptions is a configurable ReadWriteTransaction.

See https://godoc.org/cloud.google.com/go/spanner#ReadWriteTransaction for more details.
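
For example, a sketch that requests commit statistics:

resp, err := client.ReadWriteTransactionWithOptions(ctx,
    func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
        // TODO: Buffer mutations or run DML using txn.
        return nil
    },
    spanner.TransactionOptions{CommitOptions: spanner.CommitOptions{ReturnCommitStats: true}},
)
if err != nil {
    // TODO: Handle error.
}
_ = resp.CommitStats // Populated because ReturnCommitStats was set.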

func (*Client) Single

func (c *Client) Single() *ReadOnlyTransaction

Single provides a read-only snapshot transaction optimized for the case where only a single read or query is needed. This is more efficient than using ReadOnlyTransaction() for a single read or query.

Single will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency or to "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.

Example

package main

import (
	"context"

	"cloud.google.com/go/spanner"
)

const myDB = "projects/my-project/instances/my-instance/databases/my-db"

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, myDB)
	if err != nil {
		// TODO: Handle error.
	}
	iter := client.Single().Query(ctx, spanner.NewStatement("SELECT FirstName FROM Singers"))
	_ = iter // TODO: iterate using Next or Do.
}

ClientConfig

type ClientConfig struct {
	// NumChannels is the number of gRPC channels.
	// If zero, a reasonable default is used based on the execution environment.
	//
	// Deprecated: The Spanner client now uses a pool of gRPC connections. Use
	// option.WithGRPCConnectionPool(numConns) instead to specify the number of
	// connections the client should use. The client will default to a
	// reasonable default if this option is not specified.
	NumChannels int

	// SessionPoolConfig is the configuration for session pool.
	SessionPoolConfig

	// SessionLabels for the sessions created by this client.
	// See https://cloud.google.com/spanner/docs/reference/rpc/google.spanner.v1#session
	// for more info.
	SessionLabels map[string]string

	// QueryOptions is the configuration for executing a sql query.
	QueryOptions QueryOptions

	// CallOptions is the configuration for providing custom retry settings that
	// override the default values.
	CallOptions *vkit.CallOptions
	// contains filtered or unexported fields
}

ClientConfig has configurations for the client.

CommitOptions

type CommitOptions struct {
	ReturnCommitStats bool
}

CommitOptions provides options for committing a transaction in a database.

CommitResponse

type CommitResponse struct {
	// CommitTs is the commit time for a transaction.
	CommitTs time.Time
	// CommitStats is the commit statistics for a transaction.
	CommitStats *sppb.CommitResponse_CommitStats
}

CommitResponse provides a response of a transaction commit in a database.

Decoder

type Decoder interface {
	DecodeSpanner(input interface{}) error
}

Decoder is the interface implemented by a custom type that can be decoded from a supported type by Spanner. A code example:

type customField struct {
	Prefix string
	Suffix string
}

// Convert a string to a customField value
func (cf *customField) DecodeSpanner(val interface{}) (err error) {
	strVal, ok := val.(string)
	if !ok {
		return fmt.Errorf("failed to decode customField: %v", val)
	}
	s := strings.Split(strVal, "-")
	if len(s) > 1 {
		cf.Prefix = s[0]
		cf.Suffix = s[1]
	}
	return nil
}

Encoder

type Encoder interface {
	EncodeSpanner() (interface{}, error)
}

Encoder is the interface implemented by a custom type that can be encoded to a supported type by Spanner. A code example:

type customField struct {
	Prefix string
	Suffix string
}

// Convert a customField value to a string
func (cf customField) EncodeSpanner() (interface{}, error) {
	var b bytes.Buffer
	b.WriteString(cf.Prefix)
	b.WriteString("-")
	b.WriteString(cf.Suffix)
	return b.String(), nil
}

Error (deprecated)

type Error struct {
	// Code is the canonical error code for describing the nature of a
	// particular error.
	//
	// Deprecated: The error code should be extracted from the wrapped error by
	// calling ErrCode(err error). This field will be removed in a future
	// release.
	Code codes.Code

	// Desc explains more details of the error.
	Desc string
	// contains filtered or unexported fields
}

Error is the structured error returned by Cloud Spanner client.

Deprecated: Unwrap any error that is returned by the Spanner client as an APIError to access the error details. Do not try to convert the error to the spanner.Error struct, as that struct may be removed in a future release.

Example:

var apiErr *apierror.APIError
_, err := spanner.NewClient(context.Background(), "projects/P/instances/I/databases/D")
errors.As(err, &apiErr)

func (*Error) Error (deprecated)

func (e *Error) Error() string

Error implements error.Error.

func (*Error) GRPCStatus (deprecated)

func (e *Error) GRPCStatus() *status.Status

GRPCStatus returns the corresponding gRPC Status of this Spanner error. This allows the error to be converted to a gRPC status using status.Convert(error).

func (*Error) Unwrap (deprecated)

func (e *Error) Unwrap() error

Unwrap returns the wrapped error (if any).

GenericColumnValue

type GenericColumnValue struct {
	Type  *sppb.Type
	Value *proto3.Value
}

GenericColumnValue represents the generic encoded value and type of the column. See google.spanner.v1.ResultSet proto for details. This can be useful for proxying query results when the result types are not known in advance.

If you populate a GenericColumnValue from a row using Row.Column or related methods, do not modify the contents of Type and Value.

func (GenericColumnValue) Decode

func (v GenericColumnValue) Decode(ptr interface{}) error

Decode decodes a GenericColumnValue. The ptr argument should be a pointer to a Go value that can accept v.

Example

package main

import (
	"fmt"

	"cloud.google.com/go/spanner"

	sppb "google.golang.org/genproto/googleapis/spanner/v1"
)

func main() {
	// In real applications, rows can be retrieved by methods like client.Single().ReadRow().
	row, err := spanner.NewRow([]string{"intCol", "strCol"}, []interface{}{42, "my-text"})
	if err != nil {
		// TODO: Handle error.
	}
	for i := 0; i < row.Size(); i++ {
		var col spanner.GenericColumnValue
		if err := row.