Index
DlpService (interface)
Action (message)
Action.Deidentify (message)
Action.JobNotificationEmails (message)
Action.PublishFindingsToCloudDataCatalog (message)
Action.PublishFindingsToDataplexCatalog (message)
Action.PublishSummaryToCscc (message)
Action.PublishToPubSub (message)
Action.PublishToStackdriver (message)
Action.SaveFindings (message)
ActionDetails (message)
ActivateJobTriggerRequest (message)
AllOtherDatabaseResources (message)
AllOtherResources (message)
AmazonS3Bucket (message)
AmazonS3BucketConditions (message)
AmazonS3BucketConditions.BucketType (enum)
AmazonS3BucketConditions.ObjectStorageClass (enum)
AmazonS3BucketRegex (message)
AnalyzeDataSourceRiskDetails (message)
AnalyzeDataSourceRiskDetails.CategoricalStatsResult (message)
AnalyzeDataSourceRiskDetails.CategoricalStatsResult.CategoricalStatsHistogramBucket (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult.DeltaPresenceEstimationHistogramBucket (message)
AnalyzeDataSourceRiskDetails.DeltaPresenceEstimationResult.DeltaPresenceEstimationQuasiIdValues (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult.KAnonymityEquivalenceClass (message)
AnalyzeDataSourceRiskDetails.KAnonymityResult.KAnonymityHistogramBucket (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult.KMapEstimationHistogramBucket (message)
AnalyzeDataSourceRiskDetails.KMapEstimationResult.KMapEstimationQuasiIdValues (message)
AnalyzeDataSourceRiskDetails.LDiversityResult (message)
AnalyzeDataSourceRiskDetails.LDiversityResult.LDiversityEquivalenceClass (message)
AnalyzeDataSourceRiskDetails.LDiversityResult.LDiversityHistogramBucket (message)
AnalyzeDataSourceRiskDetails.NumericalStatsResult (message)
AnalyzeDataSourceRiskDetails.RequestedRiskAnalysisOptions (message)
AwsAccount (message)
AwsAccountRegex (message)
BigQueryDiscoveryTarget (message)
BigQueryField (message)
BigQueryKey (message)
BigQueryOptions (message)
BigQueryOptions.SampleMethod (enum)
BigQueryRegex (message)
BigQueryRegexes (message)
BigQuerySchemaModification (enum)
BigQueryTable (message)
BigQueryTableCollection (message)
BigQueryTableModification (enum)
BigQueryTableType (enum)
BigQueryTableTypeCollection (enum)
BigQueryTableTypes (message)
BoundingBox (message)
BucketingConfig (message)
BucketingConfig.Bucket (message)
ByteContentItem (message)
ByteContentItem.BytesType (enum)
CancelDlpJobRequest (message)
CharacterMaskConfig (message)
CharsToIgnore (message)
CharsToIgnore.CommonCharsToIgnore (enum)
CloudSqlDiscoveryTarget (message)
CloudSqlIamCredential (message)
CloudSqlProperties (message)
CloudSqlProperties.DatabaseEngine (enum)
CloudStorageDiscoveryTarget (message)
CloudStorageFileSet (message)
CloudStorageOptions (message)
CloudStorageOptions.FileSet (message)
CloudStorageOptions.SampleMethod (enum)
CloudStoragePath (message)
CloudStorageRegex (message)
CloudStorageRegexFileSet (message)
CloudStorageResourceReference (message)
Color (message)
ColumnDataProfile (message)
ColumnDataProfile.ColumnDataType (enum)
ColumnDataProfile.ColumnPolicyState (enum)
ColumnDataProfile.State (enum)
Connection (message)
ConnectionState (enum)
Container (message)
ContentItem (message)
ContentLocation (message)
ContentOption (enum)
CreateConnectionRequest (message)
CreateDeidentifyTemplateRequest (message)
CreateDiscoveryConfigRequest (message)
CreateDlpJobRequest (message)
CreateInspectTemplateRequest (message)
CreateJobTriggerRequest (message)
CreateStoredInfoTypeRequest (message)
CryptoDeterministicConfig (message)
CryptoHashConfig (message)
CryptoKey (message)
CryptoReplaceFfxFpeConfig (message)
CryptoReplaceFfxFpeConfig.FfxCommonNativeAlphabet (enum)
CustomInfoType (message)
CustomInfoType.DetectionRule (message)
CustomInfoType.DetectionRule.HotwordRule (message)
CustomInfoType.DetectionRule.LikelihoodAdjustment (message)
CustomInfoType.DetectionRule.Proximity (message)
CustomInfoType.Dictionary (message)
CustomInfoType.Dictionary.WordList (message)
CustomInfoType.ExclusionType (enum)
CustomInfoType.Regex (message)
CustomInfoType.SurrogateType (message)
DataProfileAction (message)
DataProfileAction.EventType (enum)
DataProfileAction.Export (message)
DataProfileAction.PubSubNotification (message)
DataProfileAction.PubSubNotification.DetailLevel (enum)
DataProfileAction.PublishToChronicle (message)
DataProfileAction.PublishToDataplexCatalog (message)
DataProfileAction.PublishToSecurityCommandCenter (message)
DataProfileAction.TagResources (message)
DataProfileAction.TagResources.TagCondition (message)
DataProfileAction.TagResources.TagValue (message)
DataProfileBigQueryRowSchema (message)
DataProfileConfigSnapshot (message)
DataProfileFinding (message)
DataProfileFindingLocation (message)
DataProfileFindingRecordLocation (message)
DataProfileJobConfig (message)
DataProfileLocation (message)
DataProfilePubSubCondition (message)
DataProfilePubSubCondition.ProfileScoreBucket (enum)
DataProfilePubSubCondition.PubSubCondition (message)
DataProfilePubSubCondition.PubSubExpressions (message)
DataProfilePubSubCondition.PubSubExpressions.PubSubLogicalOperator (enum)
DataProfilePubSubMessage (message)
DataProfileUpdateFrequency (enum)
DataRiskLevel (message)
DataRiskLevel.DataRiskLevelScore (enum)
DataSourceType (message)
DatabaseResourceCollection (message)
DatabaseResourceReference (message)
DatabaseResourceRegex (message)
DatabaseResourceRegexes (message)
DatastoreKey (message)
DatastoreOptions (message)
DateShiftConfig (message)
DateTime (message)
DateTime.TimeZone (message)
DeidentifyConfig (message)
DeidentifyContentRequest (message)
DeidentifyContentResponse (message)
DeidentifyDataSourceDetails (message)
DeidentifyDataSourceDetails.RequestedDeidentifyOptions (message)
DeidentifyDataSourceStats (message)
DeidentifyTemplate (message)
DeleteConnectionRequest (message)
DeleteDeidentifyTemplateRequest (message)
DeleteDiscoveryConfigRequest (message)
DeleteDlpJobRequest (message)
DeleteFileStoreDataProfileRequest (message)
DeleteInspectTemplateRequest (message)
DeleteJobTriggerRequest (message)
DeleteStoredInfoTypeRequest (message)
DeleteTableDataProfileRequest (message)
Disabled (message)
DiscoveryBigQueryConditions (message)
DiscoveryBigQueryConditions.OrConditions (message)
DiscoveryBigQueryFilter (message)
DiscoveryBigQueryFilter.AllOtherBigQueryTables (message)
DiscoveryCloudSqlConditions (message)
DiscoveryCloudSqlConditions.DatabaseEngine (enum)
DiscoveryCloudSqlConditions.DatabaseResourceType (enum)
DiscoveryCloudSqlFilter (message)
DiscoveryCloudSqlGenerationCadence (message)
DiscoveryCloudSqlGenerationCadence.SchemaModifiedCadence (message)
DiscoveryCloudSqlGenerationCadence.SchemaModifiedCadence.CloudSqlSchemaModification (enum)
DiscoveryCloudStorageConditions (message)
DiscoveryCloudStorageConditions.CloudStorageBucketAttribute (enum)
DiscoveryCloudStorageConditions.CloudStorageObjectAttribute (enum)
DiscoveryCloudStorageFilter (message)
DiscoveryCloudStorageGenerationCadence (message)
DiscoveryConfig (message)
DiscoveryConfig.OrgConfig (message)
DiscoveryConfig.Status (enum)
DiscoveryFileStoreConditions (message)
DiscoveryGenerationCadence (message)
DiscoveryInspectTemplateModifiedCadence (message)
DiscoveryOtherCloudConditions (message)
DiscoveryOtherCloudFilter (message)
DiscoveryOtherCloudGenerationCadence (message)
DiscoverySchemaModifiedCadence (message)
DiscoveryStartingLocation (message)
DiscoveryTableModifiedCadence (message)
DiscoveryTarget (message)
DiscoveryVertexDatasetConditions (message)
DiscoveryVertexDatasetFilter (message)
DiscoveryVertexDatasetGenerationCadence (message)
DlpJob (message)
DlpJob.JobState (enum)
DlpJobType (enum)
DocumentLocation (message)
Domain (message)
Domain.Category (enum)
Domain.Signal (enum)
EncryptionStatus (enum)
EntityId (message)
Error (message)
Error.ErrorExtraInfo (enum)
ExcludeByHotword (message)
ExcludeInfoTypes (message)
ExclusionRule (message)
FieldId (message)
FieldTransformation (message)
FileClusterSummary (message)
FileClusterType (message)
FileClusterType.Cluster (enum)
FileExtensionInfo (message)
FileStoreCollection (message)
FileStoreDataProfile (message)
FileStoreDataProfile.State (enum)
FileStoreInfoTypeSummary (message)
FileStoreRegex (message)
FileStoreRegexes (message)
FileType (enum)
Finding (message)
FinishDlpJobRequest (message)
FixedSizeBucketingConfig (message)
GetColumnDataProfileRequest (message)
GetConnectionRequest (message)
GetDeidentifyTemplateRequest (message)
GetDiscoveryConfigRequest (message)
GetDlpJobRequest (message)
GetFileStoreDataProfileRequest (message)
GetInspectTemplateRequest (message)
GetJobTriggerRequest (message)
GetProjectDataProfileRequest (message)
GetStoredInfoTypeRequest (message)
GetTableDataProfileRequest (message)
HybridContentItem (message)
HybridFindingDetails (message)
HybridInspectDlpJobRequest (message)
HybridInspectJobTriggerRequest (message)
HybridInspectResponse (message)
HybridInspectStatistics (message)
HybridOptions (message)
ImageLocation (message)
ImageTransformations (message)
ImageTransformations.ImageTransformation (message)
ImageTransformations.ImageTransformation.AllInfoTypes (message)
ImageTransformations.ImageTransformation.AllText (message)
ImageTransformations.ImageTransformation.SelectedInfoTypes (message)
InfoType (message)
InfoTypeCategory (message)
InfoTypeCategory.IndustryCategory (enum)
InfoTypeCategory.LocationCategory (enum)
InfoTypeCategory.TypeCategory (enum)
InfoTypeDescription (message)
InfoTypeStats (message)
InfoTypeSummary (message)
InfoTypeSupportedBy (enum)
InfoTypeTransformations (message)
InfoTypeTransformations.InfoTypeTransformation (message)
InspectConfig (message)
InspectConfig.FindingLimits (message)
InspectConfig.FindingLimits.InfoTypeLimit (message)
InspectConfig.InfoTypeLikelihood (message)
InspectContentRequest (message)
InspectContentResponse (message)
InspectDataSourceDetails (message)
InspectDataSourceDetails.RequestedOptions (message)
InspectDataSourceDetails.Result (message)
InspectJobConfig (message)
InspectResult (message)
InspectTemplate (message)
InspectionRule (message)
InspectionRuleSet (message)
JobTrigger (message)
JobTrigger.Status (enum)
JobTrigger.Trigger (message)
Key (message)
Key.PathElement (message)
KindExpression (message)
KmsWrappedCryptoKey (message)
LargeCustomDictionaryConfig (message)
LargeCustomDictionaryStats (message)
Likelihood (enum)
ListColumnDataProfilesRequest (message)
ListColumnDataProfilesResponse (message)
ListConnectionsRequest (message)
ListConnectionsResponse (message)
ListDeidentifyTemplatesRequest (message)
ListDeidentifyTemplatesResponse (message)
ListDiscoveryConfigsRequest (message)
ListDiscoveryConfigsResponse (message)
ListDlpJobsRequest (message)
ListDlpJobsResponse (message)
ListFileStoreDataProfilesRequest (message)
ListFileStoreDataProfilesResponse (message)
ListInfoTypesRequest (message)
ListInfoTypesResponse (message)
ListInspectTemplatesRequest (message)
ListInspectTemplatesResponse (message)
ListJobTriggersRequest (message)
ListJobTriggersResponse (message)
ListProjectDataProfilesRequest (message)
ListProjectDataProfilesResponse (message)
ListStoredInfoTypesRequest (message)
ListStoredInfoTypesResponse (message)
ListTableDataProfilesRequest (message)
ListTableDataProfilesResponse (message)
Location (message)
LocationSupport (message)
LocationSupport.RegionalizationScope (enum)
Manual (message)
MatchingType (enum)
MetadataLocation (message)
MetadataType (enum)
NullPercentageLevel (enum)
OtherCloudDiscoveryStartingLocation (message)
OtherCloudDiscoveryStartingLocation.AwsDiscoveryStartingLocation (message)
OtherCloudDiscoveryTarget (message)
OtherCloudResourceCollection (message)
OtherCloudResourceRegex (message)
OtherCloudResourceRegexes (message)
OtherCloudSingleResourceReference (message)
OtherInfoTypeSummary (message)
OutputStorageConfig (message)
OutputStorageConfig.OutputSchema (enum)
PartitionId (message)
PrimitiveTransformation (message)
PrivacyMetric (message)
PrivacyMetric.CategoricalStatsConfig (message)
PrivacyMetric.DeltaPresenceEstimationConfig (message)
PrivacyMetric.KAnonymityConfig (message)
PrivacyMetric.KMapEstimationConfig (message)
PrivacyMetric.KMapEstimationConfig.AuxiliaryTable (message)
PrivacyMetric.KMapEstimationConfig.AuxiliaryTable.QuasiIdField (message)
PrivacyMetric.KMapEstimationConfig.TaggedField (message)
PrivacyMetric.LDiversityConfig (message)
PrivacyMetric.NumericalStatsConfig (message)
ProcessingLocation (message)
ProcessingLocation.DocumentFallbackLocation (message)
ProcessingLocation.GlobalProcessing (message)
ProcessingLocation.ImageFallbackLocation (message)
ProcessingLocation.MultiRegionProcessing (message)
ProfileGeneration (enum)
ProfileStatus (message)
ProjectDataProfile (message)
QuasiId (message)
QuoteInfo (message)
Range (message)
RecordCondition (message)
RecordCondition.Condition (message)
RecordCondition.Conditions (message)
RecordCondition.Expressions (message)
RecordCondition.Expressions.LogicalOperator (enum)
RecordKey (message)
RecordLocation (message)
RecordSuppression (message)
RecordTransformation (message)
RecordTransformations (message)
RedactConfig (message)
RedactImageRequest (message)
RedactImageRequest.ImageRedactionConfig (message)
RedactImageResponse (message)
ReidentifyContentRequest (message)
ReidentifyContentResponse (message)
RelatedResource (message)
RelationalOperator (enum)
ReplaceDictionaryConfig (message)
ReplaceValueConfig (message)
ReplaceWithInfoTypeConfig (message)
ResourceVisibility (enum)
RiskAnalysisJobConfig (message)
SaveToGcsFindingsOutput (message)
Schedule (message)
SearchConnectionsRequest (message)
SearchConnectionsResponse (message)
SecretManagerCredential (message)
SecretsDiscoveryTarget (message)
SensitivityScore (message)
SensitivityScore.SensitivityScoreLevel (enum)
StatisticalTable (message)
StatisticalTable.QuasiIdentifierField (message)
StorageConfig (message)
StorageConfig.TimespanConfig (message)
StorageMetadataLabel (message)
StoredInfoType (message)
StoredInfoTypeConfig (message)
StoredInfoTypeState (enum)
StoredInfoTypeStats (message)
StoredInfoTypeVersion (message)
StoredType (message)
Table (message)
Table.Row (message)
TableDataProfile (message)
TableDataProfile.State (enum)
TableLocation (message)
TableOptions (message)
TableReference (message)
Tag (message)
TagFilter (message)
TagFilters (message)
TimePartConfig (message)
TimePartConfig.TimePart (enum)
TransformationConfig (message)
TransformationContainerType (enum)
TransformationDescription (message)
TransformationDetails (message)
TransformationDetailsStorageConfig (message)
TransformationErrorHandling (message)
TransformationErrorHandling.LeaveUntransformed (message)
TransformationErrorHandling.ThrowError (message)
TransformationLocation (message)
TransformationOverview (message)
TransformationResultStatus (message)
TransformationResultStatusType (enum)
TransformationSummary (message)
TransformationSummary.SummaryResult (message)
TransformationSummary.TransformationResultCode (enum)
TransformationType (enum)
TransientCryptoKey (message)
UniquenessScoreLevel (enum)
UnwrappedCryptoKey (message)
UpdateConnectionRequest (message)
UpdateDeidentifyTemplateRequest (message)
UpdateDiscoveryConfigRequest (message)
UpdateInspectTemplateRequest (message)
UpdateJobTriggerRequest (message)
UpdateStoredInfoTypeRequest (message)
Value (message)
ValueFrequency (message)
VersionDescription (message)
VertexDatasetCollection (message)
VertexDatasetDiscoveryTarget (message)
VertexDatasetRegex (message)
VertexDatasetRegexes (message)
VertexDatasetResourceReference (message)
DlpService
Sensitive Data Protection provides access to a powerful sensitive data inspection, classification, and de-identification platform that works on text, images, and Google Cloud storage repositories. To learn more about concepts and find how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/.
ActivateJobTrigger |
---|
Activate a job trigger. Causes the immediate execution of a trigger instead of waiting on the trigger event to occur.
|
CancelDlpJob |
---|
Starts asynchronous cancellation on a long-running DlpJob. The server makes a best effort to cancel the DlpJob, but success is not guaranteed. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
CreateConnection |
---|
Create a Connection to an external data source.
|
CreateDeidentifyTemplate |
---|
Creates a DeidentifyTemplate for reusing frequently used configuration for de-identifying content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
CreateDiscoveryConfig |
---|
Creates a config for discovery to scan and profile storage.
|
CreateDlpJob |
---|
Creates a new job to inspect storage or calculate risk metrics. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more. When no InfoTypes or CustomInfoTypes are specified in inspect jobs, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
|
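As a sketch of the CreateDlpJob behavior described above, the plain-dict form of an inspect-job payload might look like the following. Field names follow the DLP API (`storage_config`, `inspect_config`, `actions`, and `CloudStorageOptions.FileSet.url`); the bucket and values are illustrative assumptions, not values from this reference. Note how leaving `info_types` empty triggers the automatic detector selection mentioned above.

```python
# Illustrative sketch of an inspect-job payload for CreateDlpJob.
# The bucket name is a placeholder assumption.
inspect_job = {
    "storage_config": {
        "cloud_storage_options": {
            "file_set": {"url": "gs://example-bucket/**"}  # hypothetical bucket
        }
    },
    "inspect_config": {
        # Empty info_types: the system automatically chooses detectors.
        "info_types": [],
    },
    "actions": [
        {"publish_summary_to_cscc": {}},
    ],
}

def detectors_auto_selected(job: dict) -> bool:
    """True when no InfoTypes or CustomInfoTypes are specified."""
    cfg = job.get("inspect_config", {})
    return not cfg.get("info_types") and not cfg.get("custom_info_types")
```

A client library would typically accept this dict as the `inspect_job` field of the request.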
CreateInspectTemplate |
---|
Creates an InspectTemplate for reusing frequently used configuration for inspecting content, images, and storage. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
CreateJobTrigger |
---|
Creates a job trigger to run DLP actions such as scanning storage for sensitive information on a set schedule. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
CreateStoredInfoType |
---|
Creates a pre-built stored infoType to be used for inspection. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
DeidentifyContent |
---|
De-identifies potentially sensitive info from a ContentItem. This method has limits on input size and output size. See https://cloud.google.com/sensitive-data-protection/docs/deidentify-sensitive-data to learn more. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated.
|
DeleteConnection |
---|
Delete a Connection.
|
DeleteDeidentifyTemplate |
---|
Deletes a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
DeleteDiscoveryConfig |
---|
Deletes a discovery configuration.
|
DeleteDlpJob |
---|
Deletes a long-running DlpJob. This method indicates that the client is no longer interested in the DlpJob result. The job will be canceled if possible. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
DeleteFileStoreDataProfile |
---|
Delete a FileStoreDataProfile. Will not prevent the profile from being regenerated if the resource is still included in a discovery configuration.
|
DeleteInspectTemplate |
---|
Deletes an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
DeleteJobTrigger |
---|
Deletes a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
DeleteStoredInfoType |
---|
Deletes a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
DeleteTableDataProfile |
---|
Delete a TableDataProfile. Will not prevent the profile from being regenerated if the table is still included in a discovery configuration.
|
FinishDlpJob |
---|
Finish a running hybrid DlpJob. Triggers the finalization steps and running of any enabled actions that have not yet run.
|
GetColumnDataProfile |
---|
Gets a column data profile.
|
GetConnection |
---|
Get a Connection by name.
|
GetDeidentifyTemplate |
---|
Gets a DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
GetDiscoveryConfig |
---|
Gets a discovery configuration.
|
GetDlpJob |
---|
Gets the latest state of a long-running DlpJob. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
GetFileStoreDataProfile |
---|
Gets a file store data profile.
|
GetInspectTemplate |
---|
Gets an InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
GetJobTrigger |
---|
Gets a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
GetProjectDataProfile |
---|
Gets a project data profile.
|
GetStoredInfoType |
---|
Gets a stored infoType. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
GetTableDataProfile |
---|
Gets a table data profile.
|
HybridInspectDlpJob |
---|
Inspect hybrid content and store findings to a job. To review the findings, inspect the job. Inspection will occur asynchronously.
|
HybridInspectJobTrigger |
---|
Inspect hybrid content and store findings to a trigger. The inspection will be processed asynchronously. To review the findings, monitor the jobs within the trigger.
|
InspectContent |
---|
Finds potentially sensitive info in content. This method has limits on input size, processing time, and output size. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated. For how-to guides, see https://cloud.google.com/sensitive-data-protection/docs/inspecting-images and https://cloud.google.com/sensitive-data-protection/docs/inspecting-text.
|
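To illustrate the InspectContent request described above, here is a minimal plain-dict sketch. `inspect_config`, `info_types`, `min_likelihood`, `include_quote`, and `item.value` are standard request fields, and `EMAIL_ADDRESS`/`PHONE_NUMBER` are built-in detectors; the sample text and thresholds are arbitrary choices for the example.

```python
# Sketch of an InspectContent request body (plain-dict form).
request = {
    "inspect_config": {
        # Explicit detectors; leaving this empty would let the system choose.
        "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
        "min_likelihood": "LIKELY",
        "include_quote": True,  # return the matched text in each finding
    },
    "item": {"value": "Contact me at alice@example.com"},
}
```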
ListColumnDataProfiles |
---|
Lists column data profiles for an organization.
|
ListConnections |
---|
Lists Connections in a parent. Use SearchConnections to see all connections within an organization.
|
ListDeidentifyTemplates |
---|
Lists DeidentifyTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
ListDiscoveryConfigs |
---|
Lists discovery configurations.
|
ListDlpJobs |
---|
Lists DlpJobs that match the specified filter in the request. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-storage and https://cloud.google.com/sensitive-data-protection/docs/compute-risk-analysis to learn more.
|
ListFileStoreDataProfiles |
---|
Lists file store data profiles for an organization.
|
ListInfoTypes |
---|
Returns a list of the sensitive information types that the DLP API supports. See https://cloud.google.com/sensitive-data-protection/docs/infotypes-reference to learn more.
|
ListInspectTemplates |
---|
Lists InspectTemplates. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
ListJobTriggers |
---|
Lists job triggers. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
ListProjectDataProfiles |
---|
Lists project data profiles for an organization.
|
ListStoredInfoTypes |
---|
Lists stored infoTypes. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
ListTableDataProfiles |
---|
Lists table data profiles for an organization.
|
RedactImage |
---|
Redacts potentially sensitive info from an image. This method has limits on input size, processing time, and output size. See https://cloud.google.com/sensitive-data-protection/docs/redacting-sensitive-data-images to learn more. When no InfoTypes or CustomInfoTypes are specified in this request, the system will automatically choose what detectors to run. By default this may be all types, but may change over time as detectors are updated. Only the first frame of each multiframe image is redacted. Metadata and other frames are omitted in the response.
|
ReidentifyContent |
---|
Re-identifies content that has been de-identified. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization#re-identification_in_free_text_code_example to learn more.
|
SearchConnections |
---|
Searches for Connections in a parent.
|
UpdateConnection |
---|
Update a Connection.
|
UpdateDeidentifyTemplate |
---|
Updates the DeidentifyTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates-deid to learn more.
|
UpdateDiscoveryConfig |
---|
Updates a discovery configuration.
|
UpdateInspectTemplate |
---|
Updates the InspectTemplate. See https://cloud.google.com/sensitive-data-protection/docs/creating-templates to learn more.
|
UpdateJobTrigger |
---|
Updates a job trigger. See https://cloud.google.com/sensitive-data-protection/docs/creating-job-triggers to learn more.
|
UpdateStoredInfoType |
---|
Updates the stored infoType by creating a new version. The existing version will continue to be used until the new version is ready. See https://cloud.google.com/sensitive-data-protection/docs/creating-stored-infotypes to learn more.
|
Action
A task to execute on the completion of a job. See https://cloud.google.com/sensitive-data-protection/docs/concepts-actions to learn more.
Fields | |
---|---|
Union field action . Extra events to execute after the job has finished. action can be only one of the following: |
|
save_findings |
Save resulting findings in a provided location. |
pub_sub |
Publish a notification to a Pub/Sub topic. |
publish_summary_to_cscc |
Publish summary to Cloud Security Command Center (Alpha). |
publish_findings_to_cloud_data_catalog |
Publish findings to Data Catalog. |
publish_findings_to_dataplex_catalog |
Publish findings as an aspect to Dataplex Universal Catalog. |
deidentify |
Create a de-identified copy of the input data. |
job_notification_emails |
Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts. |
publish_to_stackdriver |
Enable Stackdriver metric dlp.googleapis.com/finding_count. |
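Because `action` is a union field, exactly one of the keys listed above may be set on any Action. A small validation sketch (the key names come from this reference; the helper itself is illustrative):

```python
# The `action` field is a oneof: exactly one of these keys may be set.
ACTION_KEYS = {
    "save_findings",
    "pub_sub",
    "publish_summary_to_cscc",
    "publish_findings_to_cloud_data_catalog",
    "publish_findings_to_dataplex_catalog",
    "deidentify",
    "job_notification_emails",
    "publish_to_stackdriver",
}

def validate_action(action: dict) -> None:
    """Raise if zero or more than one action variant is set."""
    set_keys = ACTION_KEYS & action.keys()
    if len(set_keys) != 1:
        raise ValueError(f"exactly one action must be set, got {sorted(set_keys)}")
```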
Deidentify
Create a de-identified copy of a storage bucket. Only compatible with Cloud Storage buckets.
A TransformationDetail will be created for each transformation.
Compatible with: Inspection of Cloud Storage
Fields | |
---|---|
transformation_config |
User specified deidentify templates and configs for structured, unstructured, and image files. |
transformation_details_storage_config |
Config for storing transformation details. This field specifies the configuration for storing detailed metadata about each transformation performed during a de-identification process. The metadata is stored separately from the de-identified content itself and provides a granular record of both successful transformations and any failures that occurred. Enabling this configuration is essential for users who need to access comprehensive information about the status, outcome, and specifics of each transformation. The details are captured in the TransformationDetails output.
To take advantage of these benefits, set this configuration. The stored details include a description of the transformation, success or error codes, error messages, the number of bytes transformed, the location of the transformed content, and identifiers for the job and source data. |
file_types_to_transform[] |
List of user-specified file type groups to transform. If specified, only the files with these file types are transformed. If empty, all supported files are transformed. Supported types may be automatically added over time. Any unsupported file types that are set in this field are excluded from de-identification. An error is recorded for each unsupported file in the TransformationDetails output table. Currently the only file types supported are: IMAGES, TEXT_FILES, CSV, TSV. |
Union field output . Where to store the output. output can be only one of the following: |
|
cloud_storage_output |
Required. User settable Cloud Storage bucket and folders to store de-identified files. This field must be set for Cloud Storage deidentification. The output Cloud Storage bucket must be different from the input bucket. De-identified files will overwrite files in the output path. Form of: gs://bucket/folder/ or gs://bucket |
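Pulling the fields above together, a Deidentify action configuration might be sketched as the following dict. The field names (`cloud_storage_output`, `file_types_to_transform`) and the `gs://bucket/folder/` form come from this reference; the bucket name itself is a placeholder assumption.

```python
# Sketch of a Deidentify action. The output bucket must differ from
# the input bucket being scanned; the name here is a placeholder.
deidentify_action = {
    "deidentify": {
        "cloud_storage_output": "gs://example-deid-output/results/",
        # Only these file type groups are currently supported for transform.
        "file_types_to_transform": ["IMAGES", "TEXT_FILES", "CSV", "TSV"],
    }
}

def output_path_valid(action: dict) -> bool:
    """Check the gs://bucket or gs://bucket/folder/ form."""
    return action["deidentify"]["cloud_storage_output"].startswith("gs://")
```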
JobNotificationEmails
This type has no fields.
Sends an email when the job completes. The email goes to IAM project owners and technical Essential Contacts.
PublishFindingsToCloudDataCatalog
This type has no fields.
Publish findings of a DlpJob to Data Catalog. In Data Catalog, tag templates are applied to the resource that Cloud DLP scanned. Data Catalog tag templates are stored in the same project and region where the BigQuery table exists. For Cloud DLP to create and apply the tag template, the Cloud DLP service agent must have the roles/datacatalog.tagTemplateOwner permission on the project. The tag template contains fields summarizing the results of the DlpJob. Any field values previously written by another DlpJob are deleted. InfoType naming patterns are strictly enforced when using this feature.
Findings are persisted in Data Catalog storage and are governed by service-specific policies for Data Catalog. For more information, see Service Specific Terms.
Only a single instance of this action can be specified. This action is allowed only if all resources being scanned are BigQuery tables. Compatible with: Inspect
PublishFindingsToDataplexCatalog
This type has no fields.
Publish findings of a DlpJob to Dataplex Universal Catalog as a sensitive-data-protection-job-result aspect. For more information, see Send inspection results to Dataplex Universal Catalog as aspects.
Aspects are stored in Dataplex Universal Catalog storage and are governed by service-specific policies for Dataplex Universal Catalog. For more information, see Service Specific Terms.
Only a single instance of this action can be specified. This action is allowed only if all resources being scanned are BigQuery tables. Compatible with: Inspect
PublishSummaryToCscc
This type has no fields.
Publish the result summary of a DlpJob to Security Command Center. This action is available only for projects that belong to an organization. This action publishes the count of finding instances and their infoTypes. The summary of findings is persisted in Security Command Center and is governed by service-specific policies for Security Command Center. Only a single instance of this action can be specified. Compatible with: Inspect
PublishToPubSub
Publish a message into a given Pub/Sub topic when DlpJob has completed. The message contains a single field, DlpJobName, which is equal to the finished job's DlpJob.name. Compatible with: Inspect, Risk
Fields | |
---|---|
topic |
Cloud Pub/Sub topic to send notifications to. The topic must have given publishing access rights to the DLP API service account executing the long running DlpJob sending the notifications. Format is projects/{project}/topics/{topic}. |
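A tiny helper can make the `projects/{project}/topics/{topic}` format explicit; this is just a string-formatting sketch, not part of the API surface.

```python
# Build a Pub/Sub topic name in the format the `topic` field expects.
def pubsub_topic(project: str, topic: str) -> str:
    return f"projects/{project}/topics/{topic}"
```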
PublishToStackdriver
This type has no fields.
Enable Stackdriver metric dlp.googleapis.com/finding_count. This will publish a metric to Stackdriver for each infoType requested and how many findings were found for it. CustomDetectors will be bucketed as 'Custom' under the Stackdriver label 'info_type'.
SaveFindings
If set, the detailed findings will be persisted to the specified OutputStorageConfig. Only a single instance of this action can be specified. Compatible with: Inspect, Risk
Fields | |
---|---|
output_config |
Location to store findings outside of DLP. |
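As a sketch, a SaveFindings action that routes findings to a BigQuery table could be expressed as the following dict. `output_config` comes from this reference, and `table` with `project_id`/`dataset_id`/`table_id` is the standard BigQueryTable reference shape; the concrete names are placeholders.

```python
# Sketch of a SaveFindings action writing to a BigQuery table.
# project/dataset/table values are placeholder assumptions.
save_findings = {
    "save_findings": {
        "output_config": {
            "table": {
                "project_id": "example-project",
                "dataset_id": "dlp_results",
                "table_id": "findings",
            }
        }
    }
}
```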
ActionDetails
The results of an Action.
Fields | |
---|---|
Union field details . Summary of what occurred in the actions. details can be only one of the following: |
|
deidentify_details |
Outcome of a de-identification action. |
ActivateJobTriggerRequest
Request message for ActivateJobTrigger.
Fields | |
---|---|
name |
Required. Resource name of the trigger to activate. Authorization requires one or more of the following IAM permissions on the specified resource:
|
AllOtherDatabaseResources
This type has no fields.
Match database resources not covered by any other filter.
AllOtherResources
This type has no fields.
Match discovery resources not covered by any other filter.
AmazonS3Bucket
Amazon S3 bucket.
Fields | |
---|---|
aws_account |
The AWS account. |
bucket_name |
Required. The bucket name. |
AmazonS3BucketConditions
Amazon S3 bucket conditions.
Fields | |
---|---|
bucket_types[] |
Optional. Bucket types that should be profiled. Defaults to TYPE_ALL_SUPPORTED if unspecified. |
object_storage_classes[] |
Optional. Object storage classes that should be profiled. Defaults to ALL_SUPPORTED_CLASSES if unspecified. |
BucketType
Supported Amazon S3 bucket types. Defaults to TYPE_ALL_SUPPORTED.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Unused. |
TYPE_ALL_SUPPORTED |
All supported bucket types. |
TYPE_GENERAL_PURPOSE |
A general purpose Amazon S3 bucket. |
ObjectStorageClass
Supported Amazon S3 object storage classes. Defaults to ALL_SUPPORTED_CLASSES.
Enums | |
---|---|
UNSPECIFIED |
Unused. |
ALL_SUPPORTED_CLASSES |
All supported classes. |
STANDARD |
Standard object class. |
STANDARD_INFREQUENT_ACCESS |
Standard - infrequent access object class. |
GLACIER_INSTANT_RETRIEVAL |
Glacier - instant retrieval object class. |
INTELLIGENT_TIERING |
Objects in the S3 Intelligent-Tiering access tiers. |
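The two enums above both default when their list is left unset. A small sketch, modeling AmazonS3BucketConditions as a plain dict and applying the documented defaults; the `with_defaults` helper is our own, not part of the API:

```python
# Apply the documented defaults for AmazonS3BucketConditions:
# TYPE_ALL_SUPPORTED and ALL_SUPPORTED_CLASSES when unspecified.
def with_defaults(conditions: dict) -> dict:
    out = dict(conditions)
    out.setdefault("bucket_types", ["TYPE_ALL_SUPPORTED"])
    out.setdefault("object_storage_classes", ["ALL_SUPPORTED_CLASSES"])
    return out

explicit = {
    "bucket_types": ["TYPE_GENERAL_PURPOSE"],
    "object_storage_classes": ["STANDARD", "INTELLIGENT_TIERING"],
}

assert with_defaults({})["bucket_types"] == ["TYPE_ALL_SUPPORTED"]
assert with_defaults(explicit)["object_storage_classes"][0] == "STANDARD"
```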
AmazonS3BucketRegex
Amazon S3 bucket regex.
Fields | |
---|---|
aws_account_regex |
The AWS account regex. |
bucket_name_regex |
Optional. Regex to test the bucket name against. If empty, all buckets match. |
AnalyzeDataSourceRiskDetails
Result of a risk analysis operation request.
Fields | |
---|---|
requested_privacy_metric |
Privacy metric to compute. |
requested_source_table |
Input dataset to compute metrics over. |
requested_options |
The configuration used for this job. |
Union field result . Values associated with this metric. result can be only one of the following: |
|
numerical_stats_result |
Numerical stats result |
categorical_stats_result |
Categorical stats result |
k_anonymity_result |
K-anonymity result |
l_diversity_result |
L-diversity result |
k_map_estimation_result |
K-map result |
delta_presence_estimation_result |
Delta-presence result |
CategoricalStatsResult
Result of the categorical stats computation.
Fields | |
---|---|
value_frequency_histogram_buckets[] |
Histogram of value frequencies in the column. |
CategoricalStatsHistogramBucket
Histogram of value frequencies in the column.
Fields | |
---|---|
value_frequency_lower_bound |
Lower bound on the value frequency of the values in this bucket. |
value_frequency_upper_bound |
Upper bound on the value frequency of the values in this bucket. |
bucket_size |
Total number of values in this bucket. |
bucket_values[] |
Sample of value frequencies in this bucket. The total number of values returned per bucket is capped at 20. |
bucket_value_count |
Total number of distinct values in this bucket. |
DeltaPresenceEstimationResult
Result of the δ-presence computation. Note that these results are an estimation, not exact values.
Fields | |
---|---|
delta_presence_estimation_histogram[] |
The intervals [min_probability, max_probability) do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_probability: 0, max_probability: 0.1, frequency: 17} {min_probability: 0.2, max_probability: 0.3, frequency: 42} {min_probability: 0.3, max_probability: 0.4, frequency: 99} mean that there are no records with an estimated probability in [0.1, 0.2) or greater than or equal to 0.4. |
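The half-open, non-overlapping intervals can be read back with a simple lookup. A sketch using the worked example from the description, with the buckets modeled as plain dicts (following the example's `frequency` naming):

```python
# The example histogram from the field description above.
histogram = [
    {"min_probability": 0.0, "max_probability": 0.1, "frequency": 17},
    {"min_probability": 0.2, "max_probability": 0.3, "frequency": 42},
    {"min_probability": 0.3, "max_probability": 0.4, "frequency": 99},
]

def frequency_at(histogram, p):
    """Frequency of the bucket whose [min, max) interval contains p."""
    for bucket in histogram:
        if bucket["min_probability"] <= p < bucket["max_probability"]:
            return bucket["frequency"]
    return 0  # p falls in a gap, e.g. [0.1, 0.2), or is >= 0.4

assert frequency_at(histogram, 0.05) == 17
assert frequency_at(histogram, 0.15) == 0   # gap between buckets
assert frequency_at(histogram, 0.45) == 0   # at or above 0.4
```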
DeltaPresenceEstimationHistogramBucket
A DeltaPresenceEstimationHistogramBucket message with the following values: min_probability: 0.1 max_probability: 0.2 frequency: 42 means that there are 42 records for which δ is in [0.1, 0.2). An important particular case is when min_probability = max_probability = 1: then, every individual who shares this quasi-identifier combination is in the dataset.
Fields | |
---|---|
min_probability |
Between 0 and 1. |
max_probability |
Always greater than or equal to min_probability. |
bucket_size |
Number of records within these probability bounds. |
bucket_values[] |
Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_value_count |
Total number of distinct quasi-identifier tuple values in this bucket. |
DeltaPresenceEstimationQuasiIdValues
A tuple of values for the quasi-identifier columns.
Fields | |
---|---|
quasi_ids_values[] |
The quasi-identifier values. |
estimated_probability |
The estimated probability that a given individual sharing these quasi-identifier values is in the dataset. This value, typically called δ, is the ratio between the number of records in the dataset with these quasi-identifier values, and the total number of individuals (inside and outside the dataset) with these quasi-identifier values. For example, if there are 15 individuals in the dataset who share the same quasi-identifier values, and an estimated 100 people in the entire population with these values, then δ is 0.15. |
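The ratio described above, sketched directly as a one-line helper (the function name is ours, for illustration):

```python
# delta = records in the dataset with these quasi-identifier values,
# divided by the estimated number of individuals (inside and outside
# the dataset) sharing them.
def estimated_delta(records_in_dataset: int, population_with_values: int) -> float:
    return records_in_dataset / population_with_values

# The example from the description: 15 individuals in the dataset,
# an estimated 100 people in the entire population.
assert estimated_delta(15, 100) == 0.15
```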
KAnonymityResult
Result of the k-anonymity computation.
Fields | |
---|---|
equivalence_class_histogram_buckets[] |
Histogram of k-anonymity equivalence classes. |
KAnonymityEquivalenceClass
The set of columns' values that share the same k-anonymity value.
Fields | |
---|---|
quasi_ids_values[] |
Set of values defining the equivalence class. One value per quasi-identifier column in the original KAnonymity metric message. The order is always the same as the original request. |
equivalence_class_size |
Size of the equivalence class, for example, the number of rows with the above set of values. |
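A sketch of how these equivalence classes arise: rows sharing the same quasi-identifier tuple form one class, and the class size is the row count. This only illustrates the message fields above; it is not how the service computes the metric internally.

```python
from collections import Counter

# Each row is a tuple of quasi-identifier values (e.g. birth year, country).
rows = [
    ("1970", "US"), ("1970", "US"), ("1970", "US"),
    ("1985", "CA"), ("1985", "CA"),
]

# quasi_ids_values tuple -> equivalence_class_size
classes = Counter(rows)

assert classes[("1970", "US")] == 3
assert classes[("1985", "CA")] == 2
# The dataset's k-anonymity is the smallest class size.
assert min(classes.values()) == 2
```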
KAnonymityHistogramBucket
Histogram of k-anonymity equivalence classes.
Fields | |
---|---|
equivalence_class_size_lower_bound |
Lower bound on the size of the equivalence classes in this bucket. |
equivalence_class_size_upper_bound |
Upper bound on the size of the equivalence classes in this bucket. |
bucket_size |
Total number of equivalence classes in this bucket. |
bucket_values[] |
Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_value_count |
Total number of distinct equivalence classes in this bucket. |
KMapEstimationResult
Result of the reidentifiability analysis. Note that these results are an estimation, not exact values.
Fields | |
---|---|
k_map_estimation_histogram[] |
The intervals [min_anonymity, max_anonymity] do not overlap. If a value doesn't correspond to any such interval, the associated frequency is zero. For example, the following records: {min_anonymity: 1, max_anonymity: 1, frequency: 17} {min_anonymity: 2, max_anonymity: 3, frequency: 42} {min_anonymity: 5, max_anonymity: 10, frequency: 99} mean that there are no records with an estimated anonymity of 4, or greater than 10. |
KMapEstimationHistogramBucket
A KMapEstimationHistogramBucket message with the following values: min_anonymity: 3 max_anonymity: 5 frequency: 42 means that there are 42 records whose quasi-identifier values correspond to 3, 4, or 5 people in the underlying population. An important particular case is when min_anonymity = max_anonymity = 1: the frequency field then corresponds to the number of uniquely identifiable records.
Fields | |
---|---|
min_anonymity |
Always positive. |
max_anonymity |
Always greater than or equal to min_anonymity. |
bucket_size |
Number of records within these anonymity bounds. |
bucket_values[] |
Sample of quasi-identifier tuple values in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_value_count |
Total number of distinct quasi-identifier tuple values in this bucket. |
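Following the note about min_anonymity = max_anonymity = 1: the uniquely identifiable records can be read straight out of the histogram. A sketch with the buckets modeled as plain dicts (using the `bucket_size` field above):

```python
# A k_map_estimation_histogram, as plain dicts.
histogram = [
    {"min_anonymity": 1, "max_anonymity": 1, "bucket_size": 17},
    {"min_anonymity": 2, "max_anonymity": 3, "bucket_size": 42},
    {"min_anonymity": 5, "max_anonymity": 10, "bucket_size": 99},
]

def uniquely_identifiable(histogram) -> int:
    """Sum the bucket(s) where min_anonymity = max_anonymity = 1."""
    return sum(b["bucket_size"] for b in histogram
               if b["min_anonymity"] == b["max_anonymity"] == 1)

assert uniquely_identifiable(histogram) == 17
```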
KMapEstimationQuasiIdValues
A tuple of values for the quasi-identifier columns.
Fields | |
---|---|
quasi_ids_values[] |
The quasi-identifier values. |
estimated_anonymity |
The estimated anonymity for these quasi-identifier values. |
LDiversityResult
Result of the l-diversity computation.
Fields | |
---|---|
sensitive_value_frequency_histogram_buckets[] |
Histogram of l-diversity equivalence class sensitive value frequencies. |
LDiversityEquivalenceClass
The set of columns' values that share the same l-diversity value.
Fields | |
---|---|
quasi_ids_values[] |
Quasi-identifier values defining the k-anonymity equivalence class. The order is always the same as the original request. |
equivalence_class_size |
Size of the k-anonymity equivalence class. |
num_distinct_sensitive_values |
Number of distinct sensitive values in this equivalence class. |
top_sensitive_values[] |
Estimated frequencies of top sensitive values. |
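A sketch of the quantities in this message: rows are grouped by quasi-identifier tuple, and each group's distinct sensitive values are counted. Illustrative only, not the service's implementation.

```python
from collections import defaultdict

# (quasi-identifier tuple, sensitive value) per row.
rows = [
    (("1970", "US"), "flu"),
    (("1970", "US"), "flu"),
    (("1970", "US"), "asthma"),
    (("1985", "CA"), "flu"),
]

groups = defaultdict(list)
for quasi_ids, sensitive in rows:
    groups[quasi_ids].append(sensitive)

# equivalence_class_size and num_distinct_sensitive_values per class.
sizes = {q: len(v) for q, v in groups.items()}
distinct = {q: len(set(v)) for q, v in groups.items()}

assert sizes[("1970", "US")] == 3
assert distinct[("1970", "US")] == 2   # l = 2 for this class
assert distinct[("1985", "CA")] == 1
```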
LDiversityHistogramBucket
Histogram of l-diversity equivalence class sensitive value frequencies.
Fields | |
---|---|
sensitive_value_frequency_lower_bound |
Lower bound on the sensitive value frequencies of the equivalence classes in this bucket. |
sensitive_value_frequency_upper_bound |
Upper bound on the sensitive value frequencies of the equivalence classes in this bucket. |
bucket_size |
Total number of equivalence classes in this bucket. |
bucket_values[] |
Sample of equivalence classes in this bucket. The total number of classes returned per bucket is capped at 20. |
bucket_value_count |
Total number of distinct equivalence classes in this bucket. |
NumericalStatsResult
Result of the numerical stats computation.
Fields | |
---|---|
min_value |
Minimum value appearing in the column. |
max_value |
Maximum value appearing in the column. |
quantile_values[] |
List of 99 values that partition the set of field values into 100 equal-sized buckets. |
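The 99 cut points that split values into 100 equal-sized buckets are percentiles. Python's standard library computes the same kind of partition, which makes for a quick way to sanity-check the shape of this field:

```python
import statistics

values = list(range(1, 1001))
# n=100 yields the 99 cut points separating 100 equal-sized buckets.
cuts = statistics.quantiles(values, n=100)

assert len(cuts) == 99
assert cuts[49] == 500.5  # the median of 1..1000 sits in the middle
```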
RequestedRiskAnalysisOptions
Risk analysis options.
Fields | |
---|---|
job_config |
The job config for the risk job. |
AwsAccount
AWS account.
Fields | |
---|---|
account_id |
Required. AWS account ID. |
AwsAccountRegex
AWS account regex.
Fields | |
---|---|
account_id_regex |
Optional. Regex to test the AWS account ID against. If empty, all accounts match. |
BigQueryDiscoveryTarget
Target used to match against for discovery with BigQuery tables
Fields | |
---|---|
filter |
Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table. |
conditions |
In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field frequency . The generation rule includes the logic on how frequently to update the data profiles. If not specified, discovery will re-run and update no more than once a month if new columns appear in the table. frequency can be only one of the following: |
|
cadence |
How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Tables that match this filter will not have profiles created. |
BigQueryField
Message defining a field of a BigQuery table.
Fields | |
---|---|
table |
Source table of the field. |
field |
Designated field in the BigQuery table. |
BigQueryKey
Row key for identifying a record in BigQuery table.
Fields | |
---|---|
table_reference |
Complete BigQuery table reference. |
row_number |
Row number inferred at the time the table was scanned. This value is nondeterministic, cannot be queried, and may be null for inspection jobs. To locate findings within a table, specify |
BigQueryOptions
Options defining BigQuery table and row identifiers.
Fields | |
---|---|
table_reference |
Complete BigQuery table reference. |
identifying_fields[] |
Table fields that may uniquely identify a row within the table. When |
rows_limit |
Max number of rows to scan. If the table has more rows than this value, the rest of the rows are omitted. If not set, or if set to 0, all rows will be scanned. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig. |
rows_limit_percent |
Max percentage of rows to scan. The rest are omitted. The number of rows scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. Only one of rows_limit and rows_limit_percent can be specified. Cannot be used in conjunction with TimespanConfig. Caution: A known issue is causing the |
sample_method |
How to sample the data. |
excluded_fields[] |
References to fields excluded from scanning. This allows you to skip inspection of entire columns which you know have no findings. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used. |
included_fields[] |
Limit scanning only to these fields. When inspecting a table, we recommend that you inspect all columns. Otherwise, findings might be affected because hints from excluded columns will not be used. |
SampleMethod
How to sample rows if not all rows are scanned. Meaningful only when used in conjunction with either rows_limit or rows_limit_percent. If not specified, rows are scanned in the order BigQuery reads them.
Enums | |
---|---|
SAMPLE_METHOD_UNSPECIFIED |
No sampling. |
TOP |
Scan groups of rows in the order BigQuery provides (default). Multiple groups of rows may be scanned in parallel, so results may not appear in the same order the rows are read. |
RANDOM_START |
Randomly pick groups of rows to scan. |
BigQueryRegex
A pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
Fields | |
---|---|
project_id_regex |
For organizations, if unset, will match all projects. Has no effect for data profile configurations created within a project. |
dataset_id_regex |
If unset, this property matches all datasets. |
table_id_regex |
If unset, this property matches all tables. |
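An illustrative matcher for the semantics above: an unset field matches everything, otherwise the id is tested against the pattern. The real service uses RE2 and its own anchoring rules, so treat this Python `re` sketch as an approximation; the `matches` helper is ours.

```python
import re

def matches(regex: dict, project_id: str, dataset_id: str, table_id: str) -> bool:
    """True if every set *_regex field fully matches its id."""
    checks = [("project_id_regex", project_id),
              ("dataset_id_regex", dataset_id),
              ("table_id_regex", table_id)]
    return all(re.fullmatch(regex[key], value)
               for key, value in checks if regex.get(key))

marketing = {"dataset_id_regex": "marketing.*"}  # project and table unset

assert matches(marketing, "my-project", "marketing_2024", "leads")
assert not matches(marketing, "my-project", "finance", "ledger")
```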
BigQueryRegexes
A collection of regular expressions to determine what tables to match against.
Fields | |
---|---|
patterns[] |
A single BigQuery regular expression pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. |
BigQuerySchemaModification
Attributes evaluated to determine if a schema has been modified. New values may be added at a later time.
Enums | |
---|---|
SCHEMA_MODIFICATION_UNSPECIFIED |
Unused |
SCHEMA_NEW_COLUMNS |
Profiles should be regenerated when new columns are added to the table. Default. |
SCHEMA_REMOVED_COLUMNS |
Profiles should be regenerated when columns are removed from the table. |
BigQueryTable
Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_id. Within a query a table is often referenced with a string in the format of: <project_id>:<dataset_id>.<table_id> or <project_id>.<dataset_id>.<table_id>.
Fields | |
---|---|
project_id |
The Google Cloud project ID of the project containing the table. If omitted, project ID is inferred from the API call. |
dataset_id |
Dataset ID of the table. |
table_id |
Name of the table. |
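The two textual reference formats mentioned above, sketched as a tiny formatter. The helper and its `legacy` flag are our own naming (the colon form is the one used by BigQuery legacy SQL), not part of the API:

```python
def table_reference(project_id: str, dataset_id: str, table_id: str,
                    legacy: bool = False) -> str:
    """Format <project_id>.<dataset_id>.<table_id>, or the colon variant."""
    sep = ":" if legacy else "."
    return f"{project_id}{sep}{dataset_id}.{table_id}"

assert table_reference("my-proj", "sales", "orders") == "my-proj.sales.orders"
assert table_reference("my-proj", "sales", "orders", legacy=True) == "my-proj:sales.orders"
```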
BigQueryTableCollection
Specifies a collection of BigQuery tables. Used for Discovery.
Fields | |
---|---|
Union field pattern . Maximum of 100 entries. The first filter containing a pattern that matches a table will be used. pattern can be only one of the following: |
|
include_regexes |
A collection of regular expressions to match a BigQuery table against. |
BigQueryTableModification
Attributes evaluated to determine if a table has been modified. New values may be added at a later time.
Enums | |
---|---|
TABLE_MODIFICATION_UNSPECIFIED |
Unused. |
TABLE_MODIFIED_TIMESTAMP |
A table will be considered modified when the last_modified_time from BigQuery has been updated. |
BigQueryTableType
Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.
Enums | |
---|---|
BIG_QUERY_TABLE_TYPE_UNSPECIFIED |
Unused. |
BIG_QUERY_TABLE_TYPE_TABLE |
A normal BigQuery table. |
BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE |
A table that references data stored in Cloud Storage. |
BIG_QUERY_TABLE_TYPE_SNAPSHOT |
A snapshot of a BigQuery table. |
BigQueryTableTypeCollection
Over time new types may be added. Currently VIEW, MATERIALIZED_VIEW, and non-BigLake external tables are not supported.
Enums | |
---|---|
BIG_QUERY_COLLECTION_UNSPECIFIED |
Unused. |
BIG_QUERY_COLLECTION_ALL_TYPES |
Automatically generate profiles for all tables, even if the table type is not yet fully supported for analysis. Profiles for unsupported tables will be generated with errors to indicate their partial support. When full support is added, the tables will automatically be profiled during the next scheduled run. |
BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES |
Only those types fully supported will be profiled. Will expand automatically as Cloud DLP adds support for new table types. Unsupported table types will not have partial profiles generated. |
BigQueryTableTypes
The types of BigQuery tables supported by Cloud DLP.
Fields | |
---|---|
types[] |
A set of BigQuery table types. |
BoundingBox
Bounding box encompassing detected text within an image.
Fields | |
---|---|
top |
Top coordinate of the bounding box. (0,0) is upper left. |
left |
Left coordinate of the bounding box. (0,0) is upper left. |
width |
Width of the bounding box in pixels. |
height |
Height of the bounding box in pixels. |
BucketingConfig
Generalization function that buckets values based on ranges. The ranges and replacement values are dynamically provided by the user for custom behavior, such as 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH.
This can be used on data of type: number, long, string, timestamp. If the bound Value type differs from the type of data being transformed, we will first attempt to convert the data to the type of the bound before comparing. See https://cloud.google.com/sensitive-data-protection/docs/concepts-bucketing to learn more.
Fields | |
---|---|
buckets[] |
Set of buckets. Ranges must be non-overlapping. |
Bucket
Bucket is represented as a range, along with replacement values.
Fields | |
---|---|
min |
Lower bound of the range, inclusive. Type should be the same as max if used. |
max |
Upper bound of the range, exclusive; type must match min. |
replacement_value |
Required. Replacement value for this bucket. |
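A behavioral sketch of the transform using the documented example ranges (1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH). Per the field docs, min is inclusive and max is exclusive, so each range is encoded as [min, max); the pass-through for out-of-range values is our assumption, not a documented guarantee:

```python
# Buckets encoding 1-30 -> LOW, 31-65 -> MEDIUM, 66-100 -> HIGH
# as half-open [min, max) ranges.
BUCKETS = [
    {"min": 1,  "max": 31,  "replacement_value": "LOW"},
    {"min": 31, "max": 66,  "replacement_value": "MEDIUM"},
    {"min": 66, "max": 101, "replacement_value": "HIGH"},
]

def bucketize(value, buckets):
    for b in buckets:
        if b["min"] <= value < b["max"]:
            return b["replacement_value"]
    return value  # assumed: values outside every bucket pass through

assert bucketize(30, BUCKETS) == "LOW"
assert bucketize(31, BUCKETS) == "MEDIUM"
assert bucketize(99, BUCKETS) == "HIGH"
assert bucketize(150, BUCKETS) == 150
```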
ByteContentItem
Container for bytes to inspect or redact.
Fields | |
---|---|
type |
The type of data stored in the bytes string. Default will be TEXT_UTF8. |
data |
Content data to inspect or redact. |
BytesType
The type of data being sent for inspection. To learn more, see Supported file types.
Only the first frame of each multiframe image is inspected. Metadata and other frames aren't inspected.
Enums | |
---|---|
BYTES_TYPE_UNSPECIFIED |
Unused |
IMAGE |
Any image type. |
IMAGE_JPEG |
jpeg |
IMAGE_BMP |
bmp |
IMAGE_PNG |
png |
IMAGE_SVG |
svg |
TEXT_UTF8 |
plain text |
WORD_DOCUMENT |
docx, docm, dotx, dotm |
PDF |
|
POWERPOINT_DOCUMENT |
pptx, pptm, potx, potm, pot |
EXCEL_DOCUMENT |
xlsx, xlsm, xltx, xltm |
AVRO |
avro |
CSV |
csv |
TSV |
tsv |
AUDIO |
Audio file types. Only used for profiling. |
VIDEO |
Video file types. Only used for profiling. |
EXECUTABLE |
Executable file types. Only used for profiling. |
AI_MODEL |
AI model file types. Only used for profiling. |
CancelDlpJobRequest
The request message for canceling a DLP job.
Fields | |
---|---|
name |
Required. The name of the DlpJob resource to be cancelled. Authorization requires the following IAM permission on the specified resource
|
CharacterMaskConfig
Partially mask a string by replacing a given number of characters with a fixed character. Masking can start from the beginning or end of the string. This can be used on data of any type (numbers, longs, and so on), and when de-identifying structured data we'll attempt to preserve the original data's type. (This allows you to take a long like 123 and modify it to a string like **3.)
Fields | |
---|---|
masking_character |
Character to use to mask the sensitive values—for example, |
number_to_mask |
Number of characters to mask. If not set, all matching chars will be masked. Skipped characters do not count towards this tally. If
The resulting de-identified string is |
reverse_order |
Mask characters in reverse order. For example, if |
characters_to_ignore[] |
When masking a string, items in this list will be skipped when replacing characters. For example, if the input string is |
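A behavioral sketch of the fields above, not the service's implementation: mask up to number_to_mask characters (all of them when 0) with masking_character, skipping characters_to_ignore, which also do not count toward the tally, and optionally working from the end of the string when reverse_order is set.

```python
def character_mask(value: str, masking_character: str = "*",
                   number_to_mask: int = 0, reverse_order: bool = False,
                   characters_to_ignore: str = "") -> str:
    chars = list(value)
    order = range(len(chars) - 1, -1, -1) if reverse_order else range(len(chars))
    limit = number_to_mask if number_to_mask > 0 else len(chars)
    masked = 0
    for i in order:
        if masked >= limit:
            break
        if chars[i] in characters_to_ignore:
            continue  # skipped characters don't count toward the tally
        chars[i] = masking_character
        masked += 1
    return "".join(chars)

assert character_mask("555-1234", characters_to_ignore="-") == "***-****"
assert character_mask("1234-5678", number_to_mask=4, reverse_order=True) == "1234-****"
```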
CharsToIgnore
Characters to skip when de-identifying a value. These characters are left unchanged.
Fields | |
---|---|
Union field characters . Type of characters to skip. characters can be only one of the following: |
|
characters_to_skip |
Characters to not transform when masking. |
common_characters_to_ignore |
Common characters to not transform when masking. Useful to avoid removing punctuation. |
CommonCharsToIgnore
Convenience enum for indicating common characters to not transform.
Enums | |
---|---|
COMMON_CHARS_TO_IGNORE_UNSPECIFIED |
Unused. |
NUMERIC |
0-9 |
ALPHA_UPPER_CASE |
A-Z |
ALPHA_LOWER_CASE |
a-z |
PUNCTUATION |
US Punctuation, one of !"#$%&'()*+,-./:;<=>?@[]^_`{|}~ |
WHITESPACE |
Whitespace character, one of [ \t\n\x0B\f\r] |
CloudSqlDiscoveryTarget
Target used to match against for discovery with Cloud SQL tables.
Fields | |
---|---|
filter |
Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table. |
conditions |
In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field cadence . Type of schedule. cadence can be only one of the following: |
|
generation_cadence |
How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Disable profiling for database resources that match this filter. |
CloudSqlIamCredential
This type has no fields.
Use IAM authentication to connect. This requires the Cloud SQL IAM feature to be enabled on the instance, which is not the default for Cloud SQL. See https://cloud.google.com/sql/docs/postgres/authentication and https://cloud.google.com/sql/docs/mysql/authentication.
CloudSqlProperties
Cloud SQL connection properties.
Fields | |
---|---|
connection_name |
Optional. Immutable. The Cloud SQL instance for which the connection is defined. Only one connection per instance is allowed. This can only be set at creation time, and cannot be updated. It is an error to use a connection_name from different project or region than the one that holds the connection. For example, a Connection resource for Cloud SQL connection_name |
max_connections |
Required. The DLP API will limit its connections to max_connections. Must be 2 or greater. |
database_engine |
Required. The database engine used by the Cloud SQL instance that this connection configures. |
Union field credential . How to authenticate to the instance. credential can be only one of the following: |
|
username_password |
A username and password stored in Secret Manager. |
cloud_sql_iam |
Built-in IAM authentication (must be configured in Cloud SQL). |
DatabaseEngine
Database engine of a Cloud SQL instance. New values may be added over time.
Enums | |
---|---|
DATABASE_ENGINE_UNKNOWN |
An engine that is not currently supported by Sensitive Data Protection. |
DATABASE_ENGINE_MYSQL |
Cloud SQL for MySQL instance. |
DATABASE_ENGINE_POSTGRES |
Cloud SQL for PostgreSQL instance. |
CloudStorageDiscoveryTarget
Target used to match against for discovery with Cloud Storage buckets.
Fields | |
---|---|
filter |
Required. The buckets the generation_cadence applies to. The first target with a matching filter will be the one to apply to a bucket. |
conditions |
Optional. In addition to matching the filter, these conditions must be true before a profile is generated. |
Union field cadence . How often and when to update profiles. cadence can be only one of the following: |
|
generation_cadence |
Optional. How often and when to update profiles. New buckets that match both the filter and conditions are scanned as quickly as possible depending on system capacity. |
disabled |
Optional. Disable profiling for buckets that match this filter. |
CloudStorageFileSet
Message representing a set of files in Cloud Storage.
Fields | |
---|---|
url |
The url, in the format |
CloudStorageOptions
Options defining a file or a set of files within a Cloud Storage bucket.
Fields | |
---|---|
file_set |
The set of one or more files to scan. |
bytes_limit_per_file |
Max number of bytes to scan from a file. If a scanned file's size is bigger than this value then the rest of the bytes are omitted. Only one of |
bytes_limit_per_file_percent |
Max percentage of bytes to scan from a file. The rest are omitted. The number of bytes scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. Only one of bytes_limit_per_file and bytes_limit_per_file_percent can be specified. This field can't be set if de-identification is requested. For certain file types, setting this field has no effect. For more information, see Limits on bytes scanned per file. |
file_types[] |
List of file type groups to include in the scan. If empty, all files are scanned and available data format processors are applied. In addition, the binary content of the selected files is always scanned as well. Images are scanned only as binary if the specified region does not support image inspection and no file_types were specified. Image inspection is restricted to 'global', 'us', 'asia', and 'europe'. |
sample_method |
How to sample the data. |
files_limit_percent |
Limits the number of files to scan to this percentage of the input FileSet. Number of files scanned is rounded down. Must be between 0 and 100, inclusively. Both 0 and 100 means no limit. Defaults to 0. |
FileSet
Set of files to scan.
Fields | |
---|---|
url |
The Cloud Storage url of the file(s) to scan, in the format If the url ends in a trailing slash, the bucket or directory represented by the url will be scanned non-recursively (content in sub-directories will not be scanned). This means that Exactly one of |
regex_file_set |
The regex-filtered set of files to scan. Exactly one of |
SampleMethod
How to sample bytes if not all bytes are scanned. Meaningful only when used in conjunction with bytes_limit_per_file. If not specified, scanning would start from the top.
Enums | |
---|---|
SAMPLE_METHOD_UNSPECIFIED |
No sampling. |
TOP |
Scan from the top (default). |
RANDOM_START |
For each file larger than bytes_limit_per_file, randomly pick the offset to start scanning. The scanned bytes are contiguous. |
CloudStoragePath
Message representing a single file or path in Cloud Storage.
Fields | |
---|---|
path |
A URL representing a file or path (no wildcards) in Cloud Storage. Example: |
CloudStorageRegex
A pattern to match against one or more file stores. At least one pattern must be specified. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub.
Fields | |
---|---|
project_id_regex |
Optional. For organizations, if unset, will match all projects. |
bucket_name_regex |
Optional. Regex to test the bucket name against. If empty, all buckets match. Example: "marketing2021" or "(marketing)\d{4}" will both match the bucket gs://marketing2021 |
CloudStorageRegexFileSet
Message representing a set of files in a Cloud Storage bucket. Regular expressions are used to allow fine-grained control over which files in the bucket to include.
Included files are those that match at least one item in include_regex and do not match any items in exclude_regex. Note that a file that matches items from both lists will not be included. For a match to occur, the entire file path (i.e., everything in the url after the bucket name) must match the regular expression.
For example, given the input {bucket_name: "mybucket", include_regex: ["directory1/.*"], exclude_regex: ["directory1/excluded.*"]}:
gs://mybucket/directory1/myfile will be included
gs://mybucket/directory1/directory2/myfile will be included (.* matches across /)
gs://mybucket/directory0/directory1/myfile will not be included (the full path doesn't match any items in include_regex)
gs://mybucket/directory1/excludedfile will not be included (the path matches an item in exclude_regex)
If include_regex is left empty, it will match all files by default (this is equivalent to setting include_regex: [".*"]).
Some other common use cases:
{bucket_name: "mybucket", exclude_regex: [".*\.pdf"]} will include all files in mybucket except for .pdf files
{bucket_name: "mybucket", include_regex: ["directory/[^/]+"]} will include all files directly under gs://mybucket/directory/, without matching across /
Fields | |
---|---|
bucket_name |
The name of a Cloud Storage bucket. Required. |
include_regex[] |
A list of regular expressions matching file paths to include. All files in the bucket that match at least one of these regular expressions will be included in the set of files, except for those that also match an item in Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub. |
exclude_regex[] |
A list of regular expressions matching file paths to exclude. All files in the bucket that match at least one of these regular expressions will be excluded from the scan. Regular expressions use RE2 syntax; a guide can be found under the google/re2 repository on GitHub. |
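The include/exclude logic described for this message can be sketched directly. The service uses RE2, so this Python `re` version is an approximation; the `included` helper is ours. The assertions reproduce the worked example from the message description:

```python
import re

def included(path: str, include_regex, exclude_regex) -> bool:
    """Full path (after the bucket name) must fully match an include
    pattern (".*" when the list is empty) and no exclude pattern."""
    include = include_regex or [".*"]
    if any(re.fullmatch(p, path) for p in exclude_regex):
        return False
    return any(re.fullmatch(p, path) for p in include)

include = ["directory1/.*"]
exclude = ["directory1/excluded.*"]

assert included("directory1/myfile", include, exclude)
assert included("directory1/directory2/myfile", include, exclude)  # .* crosses /
assert not included("directory0/directory1/myfile", include, exclude)
assert not included("directory1/excludedfile", include, exclude)
```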
CloudStorageResourceReference
Identifies a single Cloud Storage bucket.
Fields | |
---|---|
bucket_name |
Required. The bucket to scan. |
project_id |
Required. If within a project-level config, then this must match the config's project id. |
Color
Represents a color in the RGB color space.
Fields | |
---|---|
red |
The amount of red in the color as a value in the interval [0, 1]. |
green |
The amount of green in the color as a value in the interval [0, 1]. |
blue |
The amount of blue in the color as a value in the interval [0, 1]. |
ColumnDataProfile
The profile for a scanned column within a table.
Fields | |
---|---|
name |
The name of the profile. |
profile_status |
Success or error status from the most recent profile generation attempt. May be empty if the profile is still being generated. |
state |
State of a profile. |
profile_last_generated |
The last time the profile was generated. |
table_data_profile |
The resource name of the table data profile. |
table_full_resource |
The resource name of the resource this column is within. |
dataset_project_id |
The Google Cloud project ID that owns the profiled resource. |
dataset_location |
If supported, the location where the dataset's data is stored. See https://cloud.google.com/bigquery/docs/locations for supported BigQuery locations. |
dataset_id |
The BigQuery dataset ID, if the resource profiled is a BigQuery table. |
table_id |
The table ID. |
column |
The name of the column. |
sensitivity_score |
The sensitivity of this column. |
data_risk_level |
The data risk level for this column. |
column_info_type |
If it's been determined this column can be identified as a single type, this will be set. Otherwise the column either has unidentifiable content or mixed types. |
other_matches[] |
Other types found within this column. List will be unordered. |
estimated_null_percentage |
Approximate percentage of entries being null in the column. |
estimated_uniqueness_score |
Approximate uniqueness of the column. |
free_text_score |
The likelihood that this column contains free-form text. A value close to 1 may indicate the column is likely to contain free-form or natural language text. Range in 0-1. |
column_type |
The data type of a given column. |
policy_state |
Indicates if a policy tag has been applied to the column. |
ColumnDataType
Data types of the data in a column. Types may be added over time.
Enums | |
---|---|
COLUMN_DATA_TYPE_UNSPECIFIED |
Invalid type. |
TYPE_INT64 |
Encoded as a string in decimal format. |
TYPE_BOOL |
Encoded as a boolean "false" or "true". |
TYPE_FLOAT64 |
Encoded as a number, or string "NaN", "Infinity" or "-Infinity". |
TYPE_STRING |
Encoded as a string value. |
TYPE_BYTES |
Encoded as a base64 string per RFC 4648, section 4. |
TYPE_TIMESTAMP |
Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z |
TYPE_DATE |
Encoded as RFC 3339 full-date format string: 1985-04-12 |
TYPE_TIME |
Encoded as RFC 3339 partial-time format string: 23:20:50.52 |
TYPE_DATETIME |
Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52 |
TYPE_GEOGRAPHY |
Encoded as WKT |
TYPE_NUMERIC |
Encoded as a decimal string. |
TYPE_RECORD |
Container of ordered fields, each with a type and field name. |
TYPE_BIGNUMERIC |
Decimal type. |
TYPE_JSON |
JSON type. |
TYPE_INTERVAL |
Interval type. |
TYPE_RANGE_DATE |
Range<Date> type. |
TYPE_RANGE_DATETIME |
Range<Datetime> type. |
TYPE_RANGE_TIMESTAMP |
Range<Timestamp> type. |
ColumnPolicyState
The possible policy states for a column.
Enums | |
---|---|
COLUMN_POLICY_STATE_UNSPECIFIED |
No policy tags. |
COLUMN_POLICY_TAGGED |
Column has policy tag applied. |
State
Possible states of a profile. New items may be added.
Enums | |
---|---|
STATE_UNSPECIFIED |
Unused. |
RUNNING |
The profile is currently running. Once a profile has finished, it will transition to DONE. |
DONE |
The profile is no longer generating. If profile_status.status.code is 0, the profile succeeded, otherwise, it failed. |
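The success condition described for DONE can be sketched as follows; plain dicts stand in for the API messages, with field names as in the tables above:

```python
def profile_succeeded(profile: dict) -> bool:
    """True only if the profile is DONE and profile_status.status.code == 0."""
    if profile.get("state") != "DONE":
        return False  # RUNNING profiles have not finished yet
    status = profile.get("profile_status", {}).get("status", {})
    return status.get("code", -1) == 0
```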
Connection
A data connection to allow the DLP API to profile data in locations that require additional configuration.
Fields | |
---|---|
name |
Output only. Name of the connection: |
state |
Required. The connection's state in its lifecycle. |
errors[] |
Output only. Set if status == ERROR, to provide additional details. Will store the last 10 errors sorted with the most recent first. |
Union field properties . Type of connection. properties can be only one of the following: |
|
cloud_sql |
Connect to a Cloud SQL instance. |
ConnectionState
State of the connection. New values may be added over time.
Enums | |
---|---|
CONNECTION_STATE_UNSPECIFIED |
Unused |
MISSING_CREDENTIALS |
The DLP API automatically created this connection during an initial scan, and it is awaiting full configuration by a user. |
AVAILABLE |
A configured connection that has not encountered any errors. |
ERROR |
A configured connection that encountered errors during its last use. It will not be used again until it is set to AVAILABLE. If the resolution requires external action, then the client must send a request to set the status to AVAILABLE when the connection is ready for use. If the resolution doesn't require external action, then any changes to the connection properties will automatically mark it as AVAILABLE. |
Container
Represents a container that may contain DLP findings. Examples of a container include a file, table, or database record.
Fields | |
---|---|
type |
Container type, for example BigQuery or Cloud Storage. |
project_id |
Project where the finding was found. Can be different from the project that owns the finding. |
full_path |
A string representation of the full container name. Examples: - BigQuery: 'Project:DataSetId.TableId' - Cloud Storage: 'gs://Bucket/folders/filename.txt' |
root_path |
The root of the container. Examples:
|
relative_path |
The rest of the path after the root. Examples:
|
update_time |
Findings container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated. |
version |
Findings container version, if available ("generation" for Cloud Storage). |
ContentItem
Type of content to inspect.
Fields | |
---|---|
Union field data_item . Data of the item either in the byte array or UTF-8 string form, or table. data_item can be only one of the following: |
|
value |
String data to inspect or redact. |
table |
Structured content for inspection. See https://cloud.google.com/sensitive-data-protection/docs/inspecting-text#inspecting_a_table to learn more. |
byte_item |
Content data to inspect or redact. Replaces |
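The data_item union means exactly one of value, table, or byte_item may be set. A small validation sketch, with plain dicts standing in for the protobuf messages:

```python
def make_content_item(value=None, table=None, byte_item=None) -> dict:
    """Build a ContentItem-like dict, enforcing the data_item oneof."""
    candidates = {"value": value, "table": table, "byte_item": byte_item}
    provided = {k: v for k, v in candidates.items() if v is not None}
    if len(provided) != 1:
        raise ValueError("data_item must be exactly one of value, table, byte_item")
    return provided
```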
ContentLocation
Precise location of the finding within a document, record, image, or metadata container.
Fields | |
---|---|
container_name |
Name of the container where the finding is located. The top level name is the source file name or table name. Names of some common storage containers are formatted as follows:
Nested names could be absent if the embedded object has no string identifier (for example, an image contained within a document). |
container_timestamp |
Finding container modification timestamp, if applicable. For Cloud Storage, this field contains the last file modification timestamp. For a BigQuery table, this field contains the last_modified_time property. For Datastore, this field isn't populated. |
container_version |
Finding container version, if available ("generation" for Cloud Storage). |
Union field location . Type of the container within the file with location of the finding. location can be only one of the following: |
|
record_location |
Location within a row or record of a database table. |
image_location |
Location within an image's pixels. |
document_location |
Location data for document files. |
metadata_location |
Location within the metadata for inspected content. |
ContentOption
Deprecated and unused.
Enums | |
---|---|
CONTENT_UNSPECIFIED |
Includes entire content of a file or a data stream. |
CONTENT_TEXT |
Text content within the data, excluding any metadata. |
CONTENT_IMAGE |
Images found in the data. |
CreateConnectionRequest
Request message for CreateConnection.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization):
Authorization requires the following IAM permission on the specified resource
|
connection |
Required. The connection resource. |
CreateDeidentifyTemplateRequest
Request message for CreateDeidentifyTemplate.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
The following example
Authorization requires the following IAM permission on the specified resource
|
deidentify_template |
Required. The DeidentifyTemplate to create. |
template_id |
The template id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_id |
Deprecated. This field has no effect. |
CreateDiscoveryConfigRequest
Request message for CreateDiscoveryConfig.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization):
The following example
Authorization requires the following IAM permission on the specified resource
|
discovery_config |
Required. The DiscoveryConfig to create. |
config_id |
The config ID can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
CreateDlpJobRequest
Request message for CreateDlpJobRequest. Used to initiate long-running jobs such as calculating risk metrics or inspecting Google Cloud Storage.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on whether you have specified a processing location:
The following example
Authorization requires the following IAM permission on the specified resource
|
job_id |
The job id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_id |
Deprecated. This field has no effect. |
Union field job . The configuration details for the specific type of job to run. job can be only one of the following: |
|
inspect_job |
An inspection job scans a storage repository for InfoTypes. |
risk_job |
A risk analysis job calculates re-identification risk metrics for a BigQuery table. |
CreateInspectTemplateRequest
Request message for CreateInspectTemplate.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
The following example
Authorization requires the following IAM permission on the specified resource
|
inspect_template |
Required. The InspectTemplate to create. |
template_id |
The template id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_id |
Deprecated. This field has no effect. |
CreateJobTriggerRequest
Request message for CreateJobTrigger.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on whether you have specified a processing location:
The following example
Authorization requires one or more of the following IAM permissions on the specified resource
|
job_trigger |
Required. The JobTrigger to create. |
trigger_id |
The trigger id can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_id |
Deprecated. This field has no effect. |
CreateStoredInfoTypeRequest
Request message for CreateStoredInfoType.
Fields | |
---|---|
parent |
Required. Parent resource name. The format of this value varies depending on the scope of the request (project or organization) and whether you have specified a processing location:
The following example
Authorization requires the following IAM permission on the specified resource
|
config |
Required. Configuration of the storedInfoType to create. |
stored_info_type_id |
The storedInfoType ID can contain uppercase and lowercase letters, numbers, and hyphens; that is, it must match the regular expression: |
location_id |
Deprecated. This field has no effect. |
CryptoDeterministicConfig
Pseudonymization method that generates deterministic encryption for the given input. Outputs a base64 encoded representation of the encrypted output. Uses AES-SIV based on the RFC https://tools.ietf.org/html/rfc5297.
Fields | |
---|---|
crypto_key |
The key used by the encryption function. For deterministic encryption using AES-SIV, the provided key is internally expanded to 64 bytes prior to use. |
surrogate_info_type |
The custom info type to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom info type followed by the number of characters comprising the surrogate. The following scheme defines the format: {info type name}({surrogate character count}):{surrogate} For example, if the name of custom info type is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc' This annotation identifies the surrogate when inspecting content using the custom info type 'Surrogate'. This facilitates reversal of the surrogate when it occurs in free text. Note: For record transformations where the entire cell in a table is being transformed, surrogates are not mandatory. Surrogates are used to denote the location of the token and are necessary for re-identification in free form text. In order for inspection to work properly, the name of this info type must not occur naturally anywhere in your data; otherwise, inspection may either
Therefore, choose your custom info type name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE. |
context |
A context may be used for higher security and maintaining referential integrity such that the same identifier in two different contexts will be given a distinct surrogate. The context is appended to plaintext value being encrypted. On decryption the provided context is validated against the value used during encryption. If a context was provided during encryption, same context must be provided during decryption as well. If the context is not set, plaintext would be used as is for encryption. If the context is set but:
plaintext would be used as is for encryption. Note that case (1) is expected when an |
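The surrogate annotation format described above, {info type name}({surrogate character count}):{surrogate}, can be produced and parsed mechanically. An illustrative Python sketch (not the DLP client library):

```python
import re

def annotate_surrogate(info_type_name: str, surrogate: str) -> str:
    # Format: {info type name}({surrogate character count}):{surrogate}
    return f"{info_type_name}({len(surrogate)}):{surrogate}"

def parse_surrogate(annotated: str):
    """Recover the info type name and the surrogate from an annotated value."""
    m = re.match(r"^(.+?)\((\d+)\):(.*)$", annotated, re.DOTALL)
    if not m:
        raise ValueError("not a surrogate annotation")
    name, count, rest = m.group(1), int(m.group(2)), m.group(3)
    return name, rest[:count]
```

For example, the doc's 'MY_TOKEN_INFO_TYPE' with surrogate 'abc' round-trips to 'MY_TOKEN_INFO_TYPE(3):abc'.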
CryptoHashConfig
Pseudonymization method that generates surrogates via cryptographic hashing. Uses SHA-256. The key size must be either 32 or 64 bytes. Outputs a base64 encoded representation of the hashed output (for example, L7k0BHmF1ha5U3NfGykjro4xWi1MPVQPjhMAZbSV9mM=). Currently, only string and integer values can be hashed. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.
Fields | |
---|---|
crypto_key |
The key used by the hash function. |
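The exact keyed construction is not spelled out in this table; purely as an illustration, a keyed SHA-256 (HMAC here, which is an assumption, not necessarily DLP's construction) with the stated key-size rule produces output of the same shape, a base64-encoded 32-byte digest:

```python
import base64
import hashlib
import hmac

def keyed_hash_surrogate(key: bytes, value: str) -> str:
    """Illustrative keyed SHA-256; NOT necessarily DLP's exact construction."""
    if len(key) not in (32, 64):            # key-size rule stated above
        raise ValueError("key must be 32 or 64 bytes")
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")
```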
CryptoKey
This is a data encryption key (DEK) (as opposed to a key encryption key (KEK), which is stored by Cloud Key Management Service (Cloud KMS)). When using Cloud KMS to wrap or unwrap a DEK, be sure to set an appropriate IAM policy on the KEK to ensure an attacker cannot unwrap the DEK.
Fields | |
---|---|
Union field source . Sources of crypto keys. source can be only one of the following: |
|
transient |
Transient crypto key |
unwrapped |
Unwrapped crypto key |
kms_wrapped |
Key wrapped using Cloud KMS |
CryptoReplaceFfxFpeConfig
Replaces an identifier with a surrogate using Format Preserving Encryption (FPE) with the FFX mode of operation; however, when used in the ReidentifyContent
API method, it serves the opposite function by reversing the surrogate back into the original identifier. The identifier must be encoded as ASCII. For a given crypto key and context, the same identifier will be replaced with the same surrogate. Identifiers must be at least two characters long. In the case that the identifier is the empty string, it will be skipped. See https://cloud.google.com/sensitive-data-protection/docs/pseudonymization to learn more.
Note: We recommend using CryptoDeterministicConfig for all use cases that do not require preserving the input alphabet space and size but do require referential integrity. FPE incurs significant latency costs.
Fields | |
---|---|
crypto_key |
Required. The key used by the encryption algorithm. |
context |
The 'tweak', a context, may be used for higher security, since the same identifier in two different contexts won't be given the same surrogate. If the context is not set, a default tweak will be used. If the context is set but:
a default tweak will be used. Note that case (1) is expected when an The tweak is constructed as a sequence of bytes in big endian byte order such that:
|
surrogate_info_type |
The custom infoType to annotate the surrogate with. This annotation will be applied to the surrogate by prefixing it with the name of the custom infoType followed by the number of characters comprising the surrogate. The following scheme defines the format: info_type_name(surrogate_character_count):surrogate For example, if the name of custom infoType is 'MY_TOKEN_INFO_TYPE' and the surrogate is 'abc', the full replacement value will be: 'MY_TOKEN_INFO_TYPE(3):abc' This annotation identifies the surrogate when inspecting content using the custom infoType In order for inspection to work properly, the name of this infoType must not occur naturally anywhere in your data; otherwise, inspection may find a surrogate that does not correspond to an actual identifier. Therefore, choose your custom infoType name carefully after considering what your data looks like. One way to select a name that has a high chance of yielding reliable detection is to include one or more unicode characters that are highly improbable to exist in your data. For example, assuming your data is entered from a regular ASCII keyboard, the symbol with the hex code point 29DD might be used like so: ⧝MY_TOKEN_TYPE |
Union field alphabet . Choose an alphabet which the data being transformed will be made up of. alphabet can be only one of the following: |
|
common_alphabet |
Common alphabets. |
custom_alphabet |
This is supported by mapping these to the alphanumeric characters that the FFX mode natively supports. This happens before/after encryption/decryption. Each character listed must appear only once. Number of characters must be in the range [2, 95]. This must be encoded as ASCII. The order of characters does not matter. The full list of allowed characters is: |
radix |
The native way to select the alphabet. Must be in the range [2, 95]. |
FfxCommonNativeAlphabet
These are commonly used subsets of the alphabet that the FFX mode natively supports. In the algorithm, the alphabet is selected using the "radix". Therefore, each corresponds to a particular radix.
Enums | |
---|---|
FFX_COMMON_NATIVE_ALPHABET_UNSPECIFIED |
Unused. |
NUMERIC |
[0-9] (radix of 10) |
HEXADECIMAL |
[0-9A-F] (radix of 16) |
UPPER_CASE_ALPHA_NUMERIC |
[0-9A-Z] (radix of 36) |
ALPHA_NUMERIC |
[0-9A-Za-z] (radix of 62) |
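Each common alphabet corresponds directly to its radix, which is simply the alphabet's size; the correspondence above can be checked mechanically:

```python
import string

# Alphabet per FfxCommonNativeAlphabet value; the radix is its length.
FFX_ALPHABETS = {
    "NUMERIC": string.digits,                                            # [0-9], radix 10
    "HEXADECIMAL": string.digits + "ABCDEF",                             # [0-9A-F], radix 16
    "UPPER_CASE_ALPHA_NUMERIC": string.digits + string.ascii_uppercase,  # [0-9A-Z], radix 36
    "ALPHA_NUMERIC": string.digits + string.ascii_uppercase
                     + string.ascii_lowercase,                           # [0-9A-Za-z], radix 62
}

def radix_for(alphabet: str) -> int:
    return len(FFX_ALPHABETS[alphabet])
```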
CustomInfoType
Custom information type provided by the user. Used to find domain-specific sensitive information configurable to the data in question.
Fields | |
---|---|
info_type |
CustomInfoType can either be a new infoType, or an extension of built-in infoType, when the name matches one of existing infoTypes and that infoType is specified in |
likelihood |
Likelihood to return for this CustomInfoType. This base value can be altered by a detection rule if the finding meets the criteria specified by the rule. Defaults to |
detection_rules[] |
Set of detection rules to apply to all findings of this CustomInfoType. Rules are applied in order that they are specified. Not supported for the |
exclusion_type |
If set to EXCLUSION_TYPE_EXCLUDE this infoType will not cause a finding to be returned. It still can be used for rules matching. |
sensitivity_score |
Sensitivity for this CustomInfoType. If this CustomInfoType extends an existing InfoType, the sensitivity here will take precedence over that of the original InfoType. If unset for a CustomInfoType, it will default to HIGH. This only applies to data profiling. |
Union field type . Type of custom detector. type can be only one of the following: |
|
dictionary |
A list of phrases to detect as a CustomInfoType. |
regex |
Regular expression based CustomInfoType. |
surrogate_type |
Message for detecting output from deidentification transformations that support reversing. |
stored_type |
Load an existing |
DetectionRule
Deprecated; use InspectionRuleSet
instead. Rule for modifying a CustomInfoType
to alter behavior under certain circumstances, depending on the specific details of the rule. Not supported for the surrogate_type
custom infoType.
Fields | |
---|---|
Union field type . Type of hotword rule. type can be only one of the following: |
|
hotword_rule |
Hotword-based detection rule. |
HotwordRule
The rule that adjusts the likelihood of findings within a certain proximity of hotwords.
Fields | |
---|---|
hotword_regex |
Regular expression pattern defining what qualifies as a hotword. |
proximity |
Range of characters within which the entire hotword must reside. The total length of the window cannot exceed 1000 characters. The finding itself will be included in the window, so that hotwords can be used to match substrings of the finding itself. Suppose you want Cloud DLP to promote the likelihood of the phone number regex "(\d{3}) \d{3}-\d{4}" if the area code is known to be the area code of a company's office. In this case, use the hotword regex "(xxx)", where "xxx" is the area code in question. For tabular data, if you want to modify the likelihood of an entire column of findings, see |
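The area-code scenario above can be sketched as follows. The dict shape and the window_before field name are assumptions for illustration, not verbatim API fields:

```python
import re

# Hypothetical hotword rule for the area-code example above.
hotword_rule = {
    "hotword_regex": {"pattern": r"\(555\)"},   # "(xxx)" with xxx = company's area code
    "proximity": {"window_before": 50},         # assumed field name; window <= 1000 chars
}

# The doc's phone regex, with literal parentheses escaped for Python's re module.
phone_regex = re.compile(r"\(\d{3}\) \d{3}-\d{4}")
text = "Call the office at (555) 867-5309."

match = phone_regex.search(text)
# The hotword must fall inside the window; the finding itself is included.
window = text[max(0, match.start() - hotword_rule["proximity"]["window_before"]):match.end()]
hotword_nearby = re.search(hotword_rule["hotword_regex"]["pattern"], window) is not None
```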