| Parameter | Description |
|---|---|
| *harmCategory* | The category this safety setting should be applied to. |
| *threshold* | The threshold describing what content should be blocked. |
| *method* | The method of computing whether the threshold has been exceeded; if not specified, the default method is [severity](../Structs/SafetySetting/HarmBlockMethod.html#/s:16FirebaseVertexAI13SafetySettingV15HarmBlockMethodV8severityAEvpZ) for most models. See [harm block methods](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters#how_to_configure_safety_filters) in the Google Cloud documentation for more details. |

> Note: For models older than `gemini-1.5-flash` and `gemini-1.5-pro`, the default method is [probability](../Structs/SafetySetting/HarmBlockMethod.html#/s:16FirebaseVertexAI13SafetySettingV15HarmBlockMethodV11probabilityAEvpZ).
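For context, here is a minimal sketch of how these parameters come together when configuring a model. It assumes the standard FirebaseVertexAI entry points (`VertexAI.vertexAI()` and `generativeModel(modelName:safetySettings:)`); the model name, categories, and thresholds are illustrative choices, not recommendations:

```swift
import FirebaseVertexAI

// Assumes FirebaseApp.configure() has already been called at app startup.
// Model name, categories, and thresholds here are illustrative.
let model = VertexAI.vertexAI().generativeModel(
  modelName: "gemini-1.5-flash",
  safetySettings: [
    // `method` omitted: uses the default (severity for most models).
    SafetySetting(harmCategory: .harassment, threshold: .blockMediumAndAbove),
    // Explicitly requests probability-based blocking instead.
    SafetySetting(
      harmCategory: .dangerousContent,
      threshold: .blockOnlyHigh,
      method: .probability
    ),
  ]
)
```

When generated content meets or exceeds a configured threshold, the model returns a fallback response instead of the generated content.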
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-03-17 UTC."],[],[],null,["# FirebaseVertexAI Framework Reference\n\nSafetySetting\n=============\n\n @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)\n public struct SafetySetting : Sendable\n\n extension SafetySetting: Encodable\n\nA type used to specify a threshold for harmful content, beyond which the model will return a\nfallback response instead of generated content.\n\nSee [safety settings for Gemini\nmodels](https://firebase.google.com/docs/vertex-ai/safety-settings?platform=ios#gemini) for\nmore details.\n- `\n ``\n ``\n `\n\n ### [HarmBlockThreshold](../Structs/SafetySetting/HarmBlockThreshold.html)\n\n `\n ` \n Block at and beyond a specified [HarmProbability](../Structs/SafetyRating/HarmProbability.html). \n\n #### Declaration\n\n Swift \n\n public struct HarmBlockThreshold : EncodableProtoEnum, Sendable\n\n extension ../Structs/SafetySetting.html.HarmBlockThreshold: Encodable\n\n- `\n ``\n ``\n `\n\n ### [HarmBlockMethod](../Structs/SafetySetting/HarmBlockMethod.html)\n\n `\n ` \n The method of computing whether the [HarmBlockThreshold](../Structs/SafetySetting/HarmBlockThreshold.html) has been exceeded. \n\n #### Declaration\n\n Swift \n\n @available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)\n public struct HarmBlockMethod : EncodableProtoEnum, Sendable\n\n- `\n ``\n ``\n `\n\n ### [harmCategory](#/s:16FirebaseVertexAI13SafetySettingV12harmCategoryAA04HarmG0Vvp)\n\n `\n ` \n The category this safety setting should be applied to. \n\n #### Declaration\n\n Swift \n\n public let harmCategory: ../Structs/HarmCategory.html\n\n- `\n ``\n ``\n `\n\n ### [threshold](#/s:16FirebaseVertexAI13SafetySettingV9thresholdAC18HarmBlockThresholdVvp)\n\n `\n ` \n The threshold describing what content should be blocked. \n\n #### Declaration\n\n Swift \n\n public let threshold: ../Structs/SafetySetting/HarmBlockThreshold.html\n\n- `\n ``\n ``\n `\n\n ### [method](#/s:16FirebaseVertexAI13SafetySettingV6methodAC15HarmBlockMethodVSgvp)\n\n `\n ` \n The method of computing whether the [threshold](../Structs/SafetySetting.html#/s:16FirebaseVertexAI13SafetySettingV9thresholdAC18HarmBlockThresholdVvp) has been exceeded. \n\n #### Declaration\n\n Swift \n\n public let method: ../Structs/SafetySetting/HarmBlockMethod.html?\n\n- `\n ``\n ``\n `\n\n ### [init(harmCategory:threshold:method:)](#/s:16FirebaseVertexAI13SafetySettingV12harmCategory9threshold6methodAcA04HarmG0V_AC0J14BlockThresholdVAC0jK6MethodVSgtcfc)\n\n `\n ` \n Initializes a new safety setting with the given category and threshold. \n\n #### Declaration\n\n Swift \n\n public init(harmCategory: ../Structs/HarmCategory.html, threshold: ../Structs/SafetySetting/HarmBlockThreshold.html,\n method: ../Structs/SafetySetting/HarmBlockMethod.html? 
= nil)\n\n #### Parameters\n\n |----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n | ` `*harmCategory*` ` | The category this safety setting should be applied to. |\n | ` `*threshold*` ` | The threshold describing what content should be blocked. |\n | ` `*method*` ` | The method of computing whether the threshold has been exceeded; if not specified, the default method is [severity](../Structs/SafetySetting/HarmBlockMethod.html#/s:16FirebaseVertexAI13SafetySettingV15HarmBlockMethodV8severityAEvpZ) for most models. See [harm block methods](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters#how_to_configure_safety_filters) in the Google Cloud documentation for more details. \\\u003e Note: For models older than `gemini-1.5-flash` and `gemini-1.5-pro`, the default method \\\u003e is [probability](../Structs/SafetySetting/HarmBlockMethod.html#/s:16FirebaseVertexAI13SafetySettingV15HarmBlockMethodV11probabilityAEvpZ). |"]]