com.google.firebase.ai.type
Interfaces
LiveServerMessage |
Parent interface for responses from the model during live interactions. |
Part |
Interface representing data sent to and received from requests. |
Classes
BlockReason |
Describes why content was blocked. |
Candidate |
A candidate response generated by the model. |
Citation |
Represents a citation of content from an external source within the model's output. |
CitationMetadata |
A collection of source attributions for a piece of content. |
CodeExecutionResultPart |
Represents the result of executing code generated by the model. |
Content |
Represents content sent to and received from the model. |
Content.Builder |
Builder class to facilitate constructing complex Content objects. |
ContentModality |
Content part modality. |
CountTokensResponse |
The model's response to a count tokens request. |
Dimensions |
Represents the dimensions of an image in pixels. |
ExecutableCodePart |
Represents code generated by the model that is meant to be executed. |
FileDataPart |
Represents file data stored in Cloud Storage for Firebase, referenced by URI. |
FinishReason |
Represents the reason why the model stopped generating content. |
FunctionCallPart |
Represents function call name and params received from requests. |
FunctionCallingConfig |
The configuration that specifies the function calling behavior. |
FunctionDeclaration |
Defines a function that the model can use as a tool. |
FunctionResponsePart |
Represents function call output to be returned to the model when it requests a function call. |
GenerateContentResponse |
A response from the model. |
GenerationConfig |
Configuration parameters to use for content generation. |
GenerationConfig.Builder |
Builder for creating a GenerationConfig. |
GenerativeBackend |
Represents a reference to a backend for generative AI. |
GoogleSearch |
A tool that allows the generative model to connect to Google Search to access and incorporate up-to-date information from the web into its responses. |
GroundingAttribution |
This class is deprecated. Use GroundingChunk instead. |
GroundingChunk |
Represents a chunk of retrieved data that supports a claim in the model's response. |
GroundingMetadata |
Metadata returned to the client when grounding is enabled. |
GroundingSupport |
Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
HarmBlockMethod |
Specifies how the block method computes the score that will be compared against the HarmBlockThreshold. |
HarmBlockThreshold |
Represents the blocking threshold for a HarmCategory. |
HarmCategory |
Category for a given harm rating. |
HarmProbability |
Represents the probability that some HarmCategory is applicable in a SafetyRating. |
HarmSeverity |
Represents the severity of a HarmCategory being applicable in a SafetyRating. |
ImagePart |
Represents image data sent to and received from requests. |
ImagenAspectRatio |
Represents the aspect ratio that the generated image should conform to. |
ImagenBackgroundMask |
A generated mask image which will auto-detect and mask out the background. |
ImagenControlReference |
Represents a reference image (provided or generated) to bound the created image via controlled generation. |
ImagenControlType |
Represents a control type for controlled Imagen generation and editing. |
ImagenEditMode |
Represents the edit mode for Imagen. |
ImagenEditingConfig |
Contains the editing settings that are not specific to a reference image. |
ImagenForegroundMask |
A generated mask image which will auto-detect and mask out the foreground. |
ImagenGenerationConfig |
Configuration parameters for Imagen image generation. |
ImagenGenerationConfig.Builder |
Builder for creating an ImagenGenerationConfig. |
ImagenGenerationResponse |
Represents a response from a call to generateImages. |
ImagenImageFormat |
Represents the format an image should be returned in. |
ImagenImagePlacement |
Represents where the placement of an image is within a new, larger image, usually in the context of an outpainting request. |
ImagenInlineImage |
Represents an Imagen-generated image that is returned as inline data. |
ImagenMaskReference |
Represents a mask for Imagen editing. |
ImagenPersonFilterLevel |
A filter used to prevent images from containing depictions of children or people. |
ImagenRawImage |
Represents a base image for Imagen editing. |
ImagenRawMask |
Represents a mask for Imagen editing. |
ImagenReferenceImage |
Represents a reference image for an Imagen editing request. |
ImagenSafetyFilterLevel |
Used for safety filtering. |
ImagenSafetySettings |
A configuration for filtering unsafe content or images containing people. |
ImagenSemanticMask |
Represents a generated mask for Imagen editing which masks out certain objects using object detection. |
ImagenStyleReference |
A reference image for style transfer. |
ImagenSubjectReference |
A reference image for generating an image with a specific subject. |
ImagenSubjectReferenceType |
Represents a type for a subject reference, specifying how it should be interpreted. |
InlineDataPart |
Represents binary data with an associated MIME type sent to and received from requests. |
LiveGenerationConfig |
Configuration parameters to use for live content generation. |
LiveGenerationConfig.Builder |
Builder for creating a LiveGenerationConfig. |
LiveServerContent |
Incremental server update generated by the model in response to client messages. |
LiveServerSetupComplete |
The model is ready to receive client messages. |
LiveServerToolCall |
Request for the client to execute the provided function calls. |
LiveServerToolCallCancellation |
Notification for the client to cancel a previous function call from a LiveServerToolCall. |
LiveSession |
Represents a live WebSocket session capable of streaming content to and from the server. |
MediaData |
Represents the media data to be sent to the server. |
ModalityTokenCount |
Represents token counting info for a single modality. |
PromptFeedback |
Feedback on the prompt provided in the request. |
RequestOptions |
Configurable options unique to how requests to the backend are performed. |
ResponseModality |
Represents the type of content present in a response (e.g., text, image, audio). |
SafetyRating |
An assessment of the potential harm of some generated content. |
SafetySetting |
A configuration for a HarmCategory, specifying the threshold at which content is blocked. |
Schema |
Definition of a data type. |
SearchEntryPoint |
Represents a Google Search entry point. |
Segment |
Represents a specific segment within a Content object. |
SpeechConfig |
Speech configuration class for setting up the voice of the server's response. |
StringFormat |
|
StringFormat.Custom |
|
TextPart |
Represents text or string based data sent to and received from requests. |
ThinkingConfig |
Configuration parameters for thinking features. |
ThinkingConfig.Builder |
Builder for creating a ThinkingConfig. |
Tool |
Contains a set of tools (like function declarations) that the model has access to. |
ToolConfig |
Contains configuration for the function calling tools of the model. |
UsageMetadata |
Usage metadata about response(s). |
Voice |
Various voices supported by the server. |
Voices |
This class is deprecated. Use the Voice class instead. |
WebGroundingChunk |
A grounding chunk from the web. |
Exceptions
APINotConfiguredException |
The user's project has not been configured and enabled for the selected API. |
AudioRecordInitializationFailedException |
Audio recorder initialization failed during audio streaming. |
ContentBlockedException |
|
FirebaseAIException |
Parent class for any errors that occur from the Firebase AI SDK. |
InvalidAPIKeyException |
The provided API Key is not valid. |
InvalidLocationException |
The specified Vertex AI location is invalid. |
InvalidStateException |
An internal state occurred that should not have been possible. |
PromptBlockedException |
A request was blocked. |
QuotaExceededException |
The request has hit a quota limit. |
RequestTimeoutException |
A request took too long to complete. |
ResponseStoppedException |
A request was stopped during generation for some reason. |
SerializationException |
Something went wrong while trying to deserialize a response from the server. |
ServerException |
The server responded with a non-200 response code. |
ServiceConnectionHandshakeFailedException |
The handshake with the server failed. |
ServiceDisabledException |
The service is not enabled for this Firebase project. |
SessionAlreadyReceivingException |
The streaming session is already receiving. |
UnknownException |
Catch-all case for exceptions not explicitly expected. |
UnsupportedUserLocationException |
The user's location (region) is not supported by the API. |
Annotations
Top-level functions summary
Content |
content(role: String?, init: Content.Builder.() -> Unit) Function to build a new Content instance in a DSL-like manner. |
GenerationConfig |
generationConfig(init: GenerationConfig.Builder.() -> Unit) Helper method to construct a GenerationConfig in a DSL-like manner. |
ImagenGenerationConfig |
imagenGenerationConfig(init: ImagenGenerationConfig.Builder.() -> Unit) Helper method to construct an ImagenGenerationConfig in a DSL-like manner. |
LiveGenerationConfig |
liveGenerationConfig(init: LiveGenerationConfig.Builder.() -> Unit) Helper method to construct a LiveGenerationConfig in a DSL-like manner. |
ThinkingConfig |
thinkingConfig(init: ThinkingConfig.Builder.() -> Unit) Helper method to construct a ThinkingConfig in a DSL-like manner. |
Extension functions summary
FileDataPart? |
Returns the part as a FileDataPart if it represents a file, and null otherwise. |
Bitmap? |
Returns the part as a Bitmap if it represents an image, and null otherwise. |
InlineDataPart? |
Returns the part as an InlineDataPart if it represents inline data, and null otherwise. |
String? |
Returns the part as a String if it represents text, and null otherwise. |
ImagenInlineImage |
Converts the Bitmap into an ImagenInlineImage. |
Top-level functions
content
fun content(role: String? = "user", init: Content.Builder.() -> Unit): Content
Function to build a new Content instance in a DSL-like manner.
Contains a collection of text, image, and binary parts.
Example usage:
content("user") {
text("Example string")
}
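Since Content can also carry image and binary parts, the same DSL extends beyond text. A minimal sketch, assuming `bitmap` is an android.graphics.Bitmap already in scope and that the builder exposes an `image` method backing the ImagePart type listed above:

```kotlin
// Sketch only: `bitmap` is assumed to be an android.graphics.Bitmap,
// and image(...) is assumed to be the Content.Builder method for ImagePart.
val prompt = content("user") {
    text("Describe what is shown in this picture.")
    image(bitmap)
}
```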
generationConfig
fun generationConfig(init: GenerationConfig.Builder.() -> Unit): GenerationConfig
Helper method to construct a GenerationConfig
in a DSL-like manner.
Example Usage:
generationConfig {
temperature = 0.75f
topP = 0.5f
topK = 30
candidateCount = 4
maxOutputTokens = 300
stopSequences = listOf("in conclusion", "-----", "do you need")
}
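A config built this way is typically passed when creating a model. A hedged sketch, assuming the `Firebase.ai` entry point and `generativeModel` factory from the parent com.google.firebase.ai package; the model name below is a placeholder:

```kotlin
// Sketch only: Firebase.ai and generativeModel(...) live outside this
// package; the model name and backend choice here are assumptions.
val config = generationConfig {
    temperature = 0.4f
    maxOutputTokens = 200
}
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash", generationConfig = config)
```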
imagenGenerationConfig
@PublicPreviewAPI
fun imagenGenerationConfig(init: ImagenGenerationConfig.Builder.() -> Unit): ImagenGenerationConfig
Helper method to construct an ImagenGenerationConfig
in a DSL-like manner.
Example Usage:
imagenGenerationConfig {
negativePrompt = "People, black and white, painting"
numberOfImages = 1
aspectRatio = ImagenAspectRatio.SQUARE_1x1
imageFormat = ImagenImageFormat.png()
addWatermark = false
}
liveGenerationConfig
fun liveGenerationConfig(init: LiveGenerationConfig.Builder.() -> Unit): LiveGenerationConfig
Helper method to construct a LiveGenerationConfig
in a DSL-like manner.
Example Usage:
liveGenerationConfig {
temperature = 0.75f
topP = 0.5f
topK = 30
candidateCount = 4
maxOutputTokens = 300
...
}
thinkingConfig
fun thinkingConfig(init: ThinkingConfig.Builder.() -> Unit): ThinkingConfig
Helper method to construct a ThinkingConfig
in a DSL-like manner.
Example Usage:
thinkingConfig {
thinkingBudget = 0 // disable thinking
}
Extension functions
asFileDataOrNull
fun Part.asFileDataOrNull(): FileDataPart?
Returns the part as a FileDataPart
if it represents a file, and null otherwise.
asImageOrNull
fun Part.asImageOrNull(): Bitmap?
Returns the part as a Bitmap
if it represents an image, and null otherwise.
asInlineDataPartOrNull
fun Part.asInlineDataPartOrNull(): InlineDataPart?
Returns the part as an InlineDataPart
if it represents inline data, and null otherwise.
asTextOrNull
fun Part.asTextOrNull(): String?
Returns the part as a String
if it represents text, and null otherwise.
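Because a Part's concrete type is not known statically, these helpers support safe narrowing without casts. A sketch iterating a response's parts, assuming `response` is a GenerateContentResponse obtained elsewhere:

```kotlin
// Sketch only: `response` is assumed to be a GenerateContentResponse.
for (part in response.candidates.first().content.parts) {
    part.asTextOrNull()?.let { println("Text part: $it") }
    part.asImageOrNull()?.let { println("Image part: ${it.width}x${it.height}") }
    part.asInlineDataPartOrNull()?.let { println("Inline data: ${it.mimeType}") }
}
```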