Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets.
More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs.
See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.
Creating a Client
To start working with this package, create a Client:
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}
The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.
If you only wish to access public data, you can create an unauthenticated client with
client, err := storage.NewClient(ctx, option.WithoutAuthentication())
To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual:
// Set STORAGE_EMULATOR_HOST environment variable.
err := os.Setenv("STORAGE_EMULATOR_HOST", "localhost:9000")
if err != nil {
    // TODO: Handle error.
}

// Create client as usual.
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}

// This request is now directed to http://localhost:9000/storage/v1/b
// instead of https://storage.googleapis.com/storage/v1/b
if err := client.Bucket("my-bucket").Create(ctx, projectID, nil); err != nil {
    // TODO: Handle error.
}
Please note that there is no official emulator for Cloud Storage.
Buckets
A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle:
bkt := client.Bucket(bucketName)
A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create:
if err := bkt.Create(ctx, projectID, nil); err != nil {
    // TODO: Handle error.
}
Note that although buckets are associated with projects, bucket names are global across all projects.
Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs:
attrs, err := bkt.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("bucket %s, created at %s, is located in %s with storage class %s\n",
    attrs.Name, attrs.Created, attrs.Location, attrs.StorageClass)
Objects
An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data:
obj := bkt.Object("data")
// Write something to obj.
// w implements io.Writer.
w := obj.NewWriter(ctx)
// Write some text to obj. This will either create the object or overwrite whatever is there already.
if _, err := fmt.Fprintf(w, "This object contains text.\n"); err != nil {
    // TODO: Handle error.
}
// Close, just like writing a file.
if err := w.Close(); err != nil {
    // TODO: Handle error.
}

// Read it back.
r, err := obj.NewReader(ctx)
if err != nil {
    // TODO: Handle error.
}
defer r.Close()
if _, err := io.Copy(os.Stdout, r); err != nil {
    // TODO: Handle error.
}
// Prints "This object contains text."
Objects also have attributes, which you can fetch with ObjectHandle.Attrs:
objAttrs, err := obj.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("object %s has size %d and can be read using %s\n",
    objAttrs.Name, objAttrs.Size, objAttrs.MediaLink)
Listing objects
Listing objects in a bucket is done with the BucketHandle.Objects method:
query := &storage.Query{Prefix: ""}
var names []string
it := bkt.Objects(ctx, query)
for {
    attrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    names = append(names, attrs.Name)
}
Objects are listed lexicographically by name. To filter objects lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used:
query := &storage.Query{
    Prefix:      "",
    StartOffset: "bar/", // Only list objects lexicographically >= "bar/"
    EndOffset:   "foo/", // Only list objects lexicographically < "foo/"
}
// ... as before
If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process:
query := &storage.Query{Prefix: ""}
query.SetAttrSelection([]string{"Name"})
// ... as before
ACLs
Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs).
To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List:
acls, err := obj.ACL().List(ctx)
if err != nil {
    // TODO: Handle error.
}
for _, rule := range acls {
    fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
}
You can also set and delete ACLs.
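For example, a sketch of granting and then revoking read access (the object name, entity, and role choices are illustrative; see the ACLHandle.Set and ACLHandle.Delete examples below):

```go
obj := bkt.Object("my-object")
// Grant read access to everyone on the Internet.
if err := obj.ACL().Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
    // TODO: Handle error.
}
// Revoke that access again.
if err := obj.ACL().Delete(ctx, storage.AllUsers); err != nil {
    // TODO: Handle error.
}
```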
Conditions
Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations.
For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. Here is how to express that:
w = obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
// Proceed with writing as above.
Signed URLs
You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL.
url, err := client.Bucket(bucketName).SignedURL(objectName, opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(url)
You can also sign a URL without creating a client. See the documentation of SignedURL for details.
url, err := storage.SignedURL(bucketName, "shared-object", opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(url)
Post Policy V4 Signed Request
A Post Policy V4 signed request is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how a user may exercise the HTML form.
For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.
pv4, err := client.Bucket(bucketName).GenerateSignedPostPolicyV4(objectName, opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("URL: %s\nFields: %v\n", pv4.URL, pv4.Fields)
Credential requirements for signing
If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true:
- you are authenticated to the Storage Client with a service account's downloaded private key, either directly in code or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable (see Other Environments),
- your application is running on Google Compute Engine (GCE), or
- you are logged into gcloud using application default credentials with impersonation enabled.
Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account.
To generate the signature, you must have:
- iam.serviceAccounts.signBlob permissions on the GoogleAccessID service account, and
- the IAM Service Account Credentials API enabled (unless authenticating with a downloaded private key).
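When automatic detection is not possible, a sketch of supplying the service account email explicitly (the email, bucket, and object names are placeholders):

```go
opts := &storage.SignedURLOptions{
    // GoogleAccessID identifies the service account used to sign; the client
    // will attempt to sign via the IAM Service Account Credentials API.
    GoogleAccessID: "[email protected]",
    Method:         "GET",
    Expires:        time.Now().Add(15 * time.Minute),
}
url, err := client.Bucket("my-bucket").SignedURL("my-object", opts)
if err != nil {
    // TODO: Handle error.
}
```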
Errors
Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the googleapi.Error type. For example:
var e *googleapi.Error
if ok := errors.As(err, &e); ok {
    if e.Code == 409 { ... }
}
Retrying failed requests
Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation.
The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information.
Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example:
o := client.Bucket(bucket).Object(object).Retryer(
    // Use WithBackoff to change the timing of the exponential backoff.
    storage.WithBackoff(gax.Backoff{
        Initial: 2 * time.Second,
    }),
    // Use WithPolicy to configure the idempotency policy. RetryAlways will
    // retry the operation even if it is non-idempotent.
    storage.WithPolicy(storage.RetryAlways),
)

// Use a context timeout to set an overall deadline on the call, including all
// potential retries.
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()

// Delete an object using the specified strategy and timeout.
if err := o.Delete(ctx); err != nil {
    // Handle err.
}
Constants
DeleteAction, SetStorageClassAction, AbortIncompleteMPUAction
const (
// DeleteAction is a lifecycle action that deletes live and/or archived
// objects. Takes precedence over SetStorageClass actions.
DeleteAction = "Delete"
// SetStorageClassAction changes the storage class of live and/or archived
// objects.
SetStorageClassAction = "SetStorageClass"
// AbortIncompleteMPUAction is a lifecycle action that aborts an incomplete
// multipart upload when the multipart upload meets the conditions specified
// in the lifecycle rule. The AgeInDays condition is the only allowed
// condition for this action. AgeInDays is measured from the time the
// multipart upload was created.
AbortIncompleteMPUAction = "AbortIncompleteMultipartUpload"
)
NoPayload, JSONPayload
const (
// Send no payload with notification messages.
NoPayload = "NONE"
// Send object metadata as JSON with notification messages.
JSONPayload = "JSON_API_V1"
)
Values for Notification.PayloadFormat.
ObjectFinalizeEvent, ObjectMetadataUpdateEvent, ObjectDeleteEvent, ObjectArchiveEvent
const (
// Event that occurs when an object is successfully created.
ObjectFinalizeEvent = "OBJECT_FINALIZE"
// Event that occurs when the metadata of an existing object changes.
ObjectMetadataUpdateEvent = "OBJECT_METADATA_UPDATE"
// Event that occurs when an object is permanently deleted.
ObjectDeleteEvent = "OBJECT_DELETE"
// Event that occurs when the live version of an object becomes an
// archived version.
ObjectArchiveEvent = "OBJECT_ARCHIVE"
)
Values for Notification.EventTypes.
ScopeFullControl, ScopeReadOnly, ScopeReadWrite
const (
// ScopeFullControl grants permissions to manage your
// data and permissions in Google Cloud Storage.
ScopeFullControl = raw.DevstorageFullControlScope
// ScopeReadOnly grants permissions to
// view your data in Google Cloud Storage.
ScopeReadOnly = raw.DevstorageReadOnlyScope
// ScopeReadWrite grants permissions to manage your
// data in Google Cloud Storage.
ScopeReadWrite = raw.DevstorageReadWriteScope
)
Variables
ErrBucketNotExist, ErrObjectNotExist
var (
// ErrBucketNotExist indicates that the bucket does not exist.
ErrBucketNotExist = errors.New("storage: bucket doesn't exist")
// ErrObjectNotExist indicates that the object does not exist.
ErrObjectNotExist = errors.New("storage: object doesn't exist")
)
Functions
func ShouldRetry
ShouldRetry returns true if an error is retryable, based on best practice guidance from GCS. See https://cloud.google.com/storage/docs/retry-strategy#go for more information on what errors are considered retryable.
If you would like to customize retryable errors, use the WithErrorFunc to supply a RetryOption to your library calls. For example, to retry additional errors, you can write a custom func that wraps ShouldRetry and also specifies additional errors that should return true.
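A sketch of such a wrapper (errTransientApp is a hypothetical application-specific error; the bucket and object names are placeholders):

```go
// errTransientApp is a hypothetical error your application treats as transient.
var errTransientApp = errors.New("app: transient failure")

// Retry everything the library considers retryable, plus errTransientApp.
shouldAlsoRetry := func(err error) bool {
    return storage.ShouldRetry(err) || errors.Is(err, errTransientApp)
}
o := client.Bucket("my-bucket").Object("my-object").Retryer(
    storage.WithErrorFunc(shouldAlsoRetry),
)
```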
func SignedURL
func SignedURL(bucket, object string, opts *SignedURLOptions) (string, error)
SignedURL returns a URL for the specified object. Signed URLs allow anyone access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see https://cloud.google.com/storage/docs/accesscontrol#signed_urls_query_string_authentication. If you have initialized a Storage Client, use the BucketHandle.SignedURL method instead, which uses the Client's credentials to handle authentication.
Example
package main
import (
"fmt"
"io/ioutil"
"time"
"cloud.google.com/go/storage"
)
func main() {
pkey, err := ioutil.ReadFile("my-private-key.pem")
if err != nil {
// TODO: handle error.
}
url, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
GoogleAccessID: "[email protected]",
PrivateKey: pkey,
Method: "GET",
Expires: time.Now().Add(48 * time.Hour),
})
if err != nil {
// TODO: handle error.
}
fmt.Println(url)
}
ACLEntity
type ACLEntity string
ACLEntity refers to a user or group. They are sometimes referred to as grantees.
It could be in the form of: "user-<userId>", "user-<email>", "group-<groupId>", "group-<email>", "domain-<domain>" and "project-team-<projectId>".
Or one of the predefined constants: AllUsers, AllAuthenticatedUsers.
AllUsers, AllAuthenticatedUsers
ACLHandle
type ACLHandle struct {
// contains filtered or unexported fields
}
ACLHandle provides operations on an access control list for a Google Cloud Storage bucket or object. ACLHandle on an object operates on the latest generation of that object by default. Selecting a specific generation of an object is not currently supported by the client.
func (*ACLHandle) Delete
Delete permanently deletes the ACL entry for the given entity.
Example
package main
import (
"context"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
// No longer grant access to the bucket to everyone on the Internet.
if err := client.Bucket("my-bucket").ACL().Delete(ctx, storage.AllUsers); err != nil {
// TODO: handle error.
}
}
func (*ACLHandle) List
List retrieves ACL entries.
Example
package main
import (
"context"
"fmt"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
// List the default object ACLs for my-bucket.
aclRules, err := client.Bucket("my-bucket").DefaultObjectACL().List(ctx)
if err != nil {
// TODO: handle error.
}
fmt.Println(aclRules)
}
func (*ACLHandle) Set
Set sets the role for the given entity.
Example
package main
import (
"context"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
// Let any authenticated user read my-bucket/my-object.
obj := client.Bucket("my-bucket").Object("my-object")
if err := obj.ACL().Set(ctx, storage.AllAuthenticatedUsers, storage.RoleReader); err != nil {
// TODO: handle error.
}
}
ACLRole
type ACLRole string
ACLRole is the level of access to grant.
RoleOwner, RoleReader, RoleWriter
ACLRule
type ACLRule struct {
Entity ACLEntity
EntityID string
Role ACLRole
Domain string
Email string
ProjectTeam *ProjectTeam
}
ACLRule represents a grant for a role to an entity (user, group or team) for a Google Cloud Storage object or bucket.
Autoclass
type Autoclass struct {
// Enabled specifies whether the autoclass feature is enabled
// on the bucket.
Enabled bool
// ToggleTime is the time from which Autoclass was last toggled.
// If Autoclass is enabled when the bucket is created, the ToggleTime
// is set to the bucket creation time. This field is read-only.
ToggleTime time.Time
}
Autoclass holds the bucket's autoclass configuration. If enabled, allows for the automatic selection of the best storage class based on object access patterns. See https://cloud.google.com/storage/docs/using-autoclass for more information.
BucketAttrs
type BucketAttrs struct {
// Name is the name of the bucket.
// This field is read-only.
Name string
// ACL is the list of access control rules on the bucket.
ACL []ACLRule
// BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
// UniformBucketLevelAccess is recommended above the use of this field.
// Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
// true, will enable UniformBucketLevelAccess.
BucketPolicyOnly BucketPolicyOnly
// UniformBucketLevelAccess configures access checks to use only bucket-level IAM
// policies and ignore any ACL rules for the bucket.
// See https://cloud.google.com/storage/docs/uniform-bucket-level-access
// for more information.
UniformBucketLevelAccess UniformBucketLevelAccess
// PublicAccessPrevention is the setting for the bucket's
// PublicAccessPrevention policy, which can be used to prevent public access
// of data in the bucket. See
// https://cloud.google.com/storage/docs/public-access-prevention for more
// information.
PublicAccessPrevention PublicAccessPrevention
// DefaultObjectACL is the list of access controls to
// apply to new objects when no object ACL is provided.
DefaultObjectACL []ACLRule
// DefaultEventBasedHold is the default value for event-based hold on
// newly created objects in this bucket. It defaults to false.
DefaultEventBasedHold bool
// If not empty, applies a predefined set of access controls. It should be set
// only when creating a bucket.
// It is always empty for BucketAttrs returned from the service.
// See https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
// for valid values.
PredefinedACL string
// If not empty, applies a predefined set of default object access controls.
// It should be set only when creating a bucket.
// It is always empty for BucketAttrs returned from the service.
// See https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
// for valid values.
PredefinedDefaultObjectACL string
// Location is the location of the bucket. It defaults to "US".
// If specifying a dual-region, CustomPlacementConfig should be set in conjunction.
Location string
// The bucket's custom placement configuration that holds a list of
// regional locations for custom dual regions.
CustomPlacementConfig *CustomPlacementConfig
// MetaGeneration is the metadata generation of the bucket.
// This field is read-only.
MetaGeneration int64
// StorageClass is the default storage class of the bucket. This defines
// how objects in the bucket are stored and determines the SLA
// and the cost of storage. Typical values are "STANDARD", "NEARLINE",
// "COLDLINE" and "ARCHIVE". Defaults to "STANDARD".
// See https://cloud.google.com/storage/docs/storage-classes for all
// valid values.
StorageClass string
// Created is the creation time of the bucket.
// This field is read-only.
Created time.Time
// VersioningEnabled reports whether this bucket has versioning enabled.
VersioningEnabled bool
// Labels are the bucket's labels.
Labels map[string]string
// RequesterPays reports whether the bucket is a Requester Pays bucket.
// Clients performing operations on Requester Pays buckets must provide
// a user project (see BucketHandle.UserProject), which will be billed
// for the operations.
RequesterPays bool
// Lifecycle is the lifecycle configuration for objects in the bucket.
Lifecycle Lifecycle
// Retention policy enforces a minimum retention time for all objects
// contained in the bucket. A RetentionPolicy of nil implies the bucket
// has no minimum data retention.
//
// This feature is in private alpha release. It is not currently available to
// most customers. It might be changed in backwards-incompatible ways and is not
// subject to any SLA or deprecation policy.
RetentionPolicy *RetentionPolicy
// The bucket's Cross-Origin Resource Sharing (CORS) configuration.
CORS []CORS
// The encryption configuration used by default for newly inserted objects.
Encryption *BucketEncryption
// The logging configuration.
Logging *BucketLogging
// The website configuration.
Website *BucketWebsite
// Etag is the HTTP/1.1 Entity tag for the bucket.
// This field is read-only.
Etag string
// LocationType describes how data is stored and replicated.
// Typical values are "multi-region", "region" and "dual-region".
// This field is read-only.
LocationType string
// The project number of the project the bucket belongs to.
// This field is read-only.
ProjectNumber uint64
// RPO configures the Recovery Point Objective (RPO) policy of the bucket.
// Set to RPOAsyncTurbo to turn on Turbo Replication for a bucket.
// See https://cloud.google.com/storage/docs/managing-turbo-replication for
// more information.
RPO RPO
// Autoclass holds the bucket's autoclass configuration. If enabled,
// allows for the automatic selection of the best storage class
// based on object access patterns.
Autoclass *Autoclass
}
BucketAttrs represents the metadata for a Google Cloud Storage bucket. Read-only fields are ignored by BucketHandle.Create.
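For example, a sketch passing initial attributes as the third argument to BucketHandle.Create (the location, storage class, and bucket name shown are illustrative):

```go
attrs := &storage.BucketAttrs{
    Location:          "US-EAST1",
    StorageClass:      "NEARLINE",
    VersioningEnabled: true,
}
if err := client.Bucket("my-bucket").Create(ctx, projectID, attrs); err != nil {
    // TODO: Handle error.
}
```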
BucketAttrsToUpdate
type BucketAttrsToUpdate struct {
// If set, updates whether the bucket uses versioning.
VersioningEnabled optional.Bool
// If set, updates whether the bucket is a Requester Pays bucket.
RequesterPays optional.Bool
// DefaultEventBasedHold is the default value for event-based hold on
// newly created objects in this bucket.
DefaultEventBasedHold optional.Bool
// BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
// UniformBucketLevelAccess is recommended above the use of this field.
// Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
// true, will enable UniformBucketLevelAccess. If both BucketPolicyOnly and
// UniformBucketLevelAccess are set, the value of UniformBucketLevelAccess
// will take precedence.
BucketPolicyOnly *BucketPolicyOnly
// UniformBucketLevelAccess configures access checks to use only bucket-level IAM
// policies and ignore any ACL rules for the bucket.
// See https://cloud.google.com/storage/docs/uniform-bucket-level-access
// for more information.
UniformBucketLevelAccess *UniformBucketLevelAccess
// PublicAccessPrevention is the setting for the bucket's
// PublicAccessPrevention policy, which can be used to prevent public access
// of data in the bucket. See
// https://cloud.google.com/storage/docs/public-access-prevention for more
// information.
PublicAccessPrevention PublicAccessPrevention
// StorageClass is the default storage class of the bucket. This defines
// how objects in the bucket are stored and determines the SLA
// and the cost of storage. Typical values are "STANDARD", "NEARLINE",
// "COLDLINE" and "ARCHIVE". Defaults to "STANDARD".
// See https://cloud.google.com/storage/docs/storage-classes for all
// valid values.
StorageClass string
// If set, updates the retention policy of the bucket. Using
// RetentionPolicy.RetentionPeriod = 0 will delete the existing policy.
//
// This feature is in private alpha release. It is not currently available to
// most customers. It might be changed in backwards-incompatible ways and is not
// subject to any SLA or deprecation policy.
RetentionPolicy *RetentionPolicy
// If set, replaces the CORS configuration with a new configuration.
// An empty (rather than nil) slice causes all CORS policies to be removed.
CORS []CORS
// If set, replaces the encryption configuration of the bucket. Using
// BucketEncryption.DefaultKMSKeyName = "" will delete the existing
// configuration.
Encryption *BucketEncryption
// If set, replaces the lifecycle configuration of the bucket.
Lifecycle *Lifecycle
// If set, replaces the logging configuration of the bucket.
Logging *BucketLogging
// If set, replaces the website configuration of the bucket.
Website *BucketWebsite
// If not empty, applies a predefined set of access controls.
// See https://cloud.google.com/storage/docs/json_api/v1/buckets/patch.
PredefinedACL string
// If not empty, applies a predefined set of default object access controls.
// See https://cloud.google.com/storage/docs/json_api/v1/buckets/patch.
PredefinedDefaultObjectACL string
// RPO configures the Recovery Point Objective (RPO) policy of the bucket.
// Set to RPOAsyncTurbo to turn on Turbo Replication for a bucket.
// See https://cloud.google.com/storage/docs/managing-turbo-replication for
// more information.
RPO RPO
// If set, updates the autoclass configuration of the bucket.
// See https://cloud.google.com/storage/docs/using-autoclass for more information.
Autoclass *Autoclass
// contains filtered or unexported fields
}
BucketAttrsToUpdate define the attributes to update during an Update call.
func (*BucketAttrsToUpdate) DeleteLabel
func (ua *BucketAttrsToUpdate) DeleteLabel(name string)
DeleteLabel causes a label to be deleted when ua is used in a call to Bucket.Update.
func (*BucketAttrsToUpdate) SetLabel
func (ua *BucketAttrsToUpdate) SetLabel(name, value string)
SetLabel causes a label to be added or modified when ua is used in a call to Bucket.Update.
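A sketch combining SetLabel and DeleteLabel in a single BucketHandle.Update call (the label names and values are placeholders):

```go
var ua storage.BucketAttrsToUpdate
ua.SetLabel("environment", "staging") // add or modify this label
ua.DeleteLabel("obsolete-label")      // remove this label if present
attrs, err := client.Bucket("my-bucket").Update(ctx, ua)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(attrs.Labels)
```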
BucketConditions
type BucketConditions struct {
// MetagenerationMatch specifies that the bucket must have the given
// metageneration for the operation to occur.
// If MetagenerationMatch is zero, it has no effect.
MetagenerationMatch int64
// MetagenerationNotMatch specifies that the bucket must not have the given
// metageneration for the operation to occur.
// If MetagenerationNotMatch is zero, it has no effect.
MetagenerationNotMatch int64
}
BucketConditions constrain bucket methods to act on specific metagenerations.
The zero value is an empty set of constraints.
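For example, a sketch that applies an update only if the bucket's metageneration has not changed since its attributes were read:

```go
attrs, err := bkt.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
cond := storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}
ua := storage.BucketAttrsToUpdate{VersioningEnabled: true}
// The update fails if another writer changed the bucket metadata in between.
if _, err := bkt.If(cond).Update(ctx, ua); err != nil {
    // TODO: Handle error.
}
```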
BucketEncryption
type BucketEncryption struct {
// A Cloud KMS key name, in the form
// projects/P/locations/L/keyRings/R/cryptoKeys/K, that will be used to encrypt
// objects inserted into this bucket, if no encryption method is specified.
// The key's location must be the same as the bucket's.
DefaultKMSKeyName string
}
BucketEncryption is a bucket's encryption configuration.
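A sketch setting a default Cloud KMS key at bucket creation (the key name is a placeholder following the projects/P/locations/L/keyRings/R/cryptoKeys/K form; the key's location must match the bucket's):

```go
attrs := &storage.BucketAttrs{
    Encryption: &storage.BucketEncryption{
        DefaultKMSKeyName: "projects/P/locations/L/keyRings/R/cryptoKeys/K",
    },
}
if err := client.Bucket("my-bucket").Create(ctx, projectID, attrs); err != nil {
    // TODO: Handle error.
}
```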
BucketHandle
type BucketHandle struct {
// contains filtered or unexported fields
}
BucketHandle provides operations on a Google Cloud Storage bucket. Use Client.Bucket to get a handle.
Example
exists
package main
import (
"context"
"fmt"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
attrs, err := client.Bucket("my-bucket").Attrs(ctx)
if err == storage.ErrBucketNotExist {
fmt.Println("The bucket does not exist")
return
}
if err != nil {
// TODO: handle error.
}
fmt.Printf("The bucket exists and has attributes: %#v\n", attrs)
}
func (*BucketHandle) ACL
func (b *BucketHandle) ACL() *ACLHandle
ACL returns an ACLHandle, which provides access to the bucket's access control list. This controls who can list, create or overwrite the objects in a bucket. This call does not perform any network operations.
func (*BucketHandle) AddNotification
func (b *BucketHandle) AddNotification(ctx context.Context, n *Notification) (ret *Notification, err error)
AddNotification adds a notification to b. You must set n's TopicProjectID, TopicID and PayloadFormat, and must not set its ID. The other fields are all optional. The returned Notification's ID can be used to refer to it.
Example
package main
import (
"context"
"fmt"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
b := client.Bucket("my-bucket")
n, err := b.AddNotification(ctx, &storage.Notification{
TopicProjectID: "my-project",
TopicID: "my-topic",
PayloadFormat: storage.JSONPayload,
})
if err != nil {
// TODO: handle error.
}
fmt.Println(n.ID)
}
func (*BucketHandle) Attrs
func (b *BucketHandle) Attrs(ctx context.Context) (attrs *BucketAttrs, err error)
Attrs returns the metadata for the bucket.
Example
package main
import (
"context"
"fmt"
"cloud.google.com/go/storage"
)
func main() {
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
// TODO: handle error.
}
attrs, err := client.