OpenAI Plugin
The genkit-plugin-compat-oai Python package includes a pre-configured plugin for official OpenAI models.
Installation
pip install genkit-plugin-compat-oai
Configuration
To use this plugin, import OpenAI and openai_model and specify them when you initialize Genkit:

from genkit.ai import Genkit
from genkit.plugins.compat_oai import OpenAI, openai_model

ai = Genkit(plugins=[OpenAI()], model=openai_model('gpt-4o'))
The plugin requires an API key for the OpenAI API. You can get one from the OpenAI Platform.
Configure the plugin to use your API key by doing one of the following:

- Set the OPENAI_API_KEY environment variable to your API key.
- Specify the API key when you initialize the plugin: OpenAI(api_key=your_key). However, don't embed your API key directly in code (see the sketch below)!
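For example, here is a minimal sketch of the second option, assuming the key is already available in the environment (for instance, injected by a secret manager at deploy time) rather than written into the source:

import os

from genkit.ai import Genkit
from genkit.plugins.compat_oai import OpenAI, openai_model

# Read the key from the environment at startup instead of hard-coding it.
ai = Genkit(
    plugins=[OpenAI(api_key=os.environ['OPENAI_API_KEY'])],
    model=openai_model('gpt-4o'),
)

Note that if OPENAI_API_KEY is already set, passing api_key explicitly is redundant; it is shown here only to illustrate the second option without embedding a literal key.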
The plugin provides helpers to reference supported models.
Chat Models
You can reference chat models like gpt-4o and gpt-4-turbo using the openai_model() helper.
import structlog
from pydantic import BaseModel, Field

from genkit.ai import Genkit
from genkit.plugins.compat_oai import OpenAI, openai_model

logger = structlog.get_logger(__name__)

ai = Genkit(plugins=[OpenAI()])


@ai.flow()
async def say_hi(name: str) -> str:
    """Say hi to a name.

    Args:
        name: The name to say hi to.

    Returns:
        The response from the OpenAI API.
    """
    response = await ai.generate(
        model=openai_model('gpt-4'),
        prompt=f'hi {name}',
    )
    return response.message.content[0].root.text
You can also pass model-specific configuration:
response = await ai.generate(
    model=openai_model('gpt-4'),
    config={'temperature': 1},
    prompt=f'hi {name}',
)
Tool Calling
You can define and use tools with OpenAI models.
import httpx
from decimal import Decimal
from pydantic import BaseModel

from genkit.ai import Genkit
from genkit.plugins.compat_oai import OpenAI, openai_model

ai = Genkit(plugins=[OpenAI()])


class WeatherRequest(BaseModel):
    """Weather request."""

    latitude: Decimal
    longitude: Decimal


@ai.tool(description='Get current temperature for provided coordinates in celsius')
def get_weather_tool(coordinates: WeatherRequest) -> float:
    """Get the current temperature for provided coordinates in celsius.

    Args:
        coordinates: The coordinates to get the weather for.

    Returns:
        The current temperature for the provided coordinates.
    """
    url = (
        f'https://api.open-meteo.com/v1/forecast?'
        f'latitude={coordinates.latitude}&longitude={coordinates.longitude}'
        f'&current=temperature_2m'
    )
    with httpx.Client() as client:
        response = client.get(url)
        data = response.json()
        return float(data['current']['temperature_2m'])


@ai.flow()
async def get_weather_flow(location: str) -> str:
    """Get the weather for a location.

    Args:
        location: The location to get the weather for.

    Returns:
        The weather for the location.
    """
    response = await ai.generate(
        model=openai_model('gpt-4o-mini'),
        prompt=f"What's the weather like in {location} today?",
        tools=['get_weather_tool'],
    )
    # The response will contain the tool output if the model decided to call it.
    return response.message.content[0].root.text
Streaming
The plugin supports streaming responses.
@ai.flow()
async def say_hi_stream(name: str) -> str:
    """Say hi to a name and stream the response.

    Args:
        name: The name to say hi to.

    Returns:
        The response from the OpenAI API.
    """
    stream, _ = ai.generate_stream(
        model=openai_model('gpt-4'),
        prompt=f'hi {name}',
    )
    result = ''
    async for data in stream:
        for part in data.content:
            result += part.root.text
    return result
For JavaScript, the @genkit-ai/compat-oai package includes a pre-configured plugin for official OpenAI models.
Installation
npm install @genkit-ai/compat-oai
Configuration
To use this plugin, import openAI and specify it when you initialize Genkit:

import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

export const ai = genkit({
  plugins: [openAI()],
});
The plugin requires an API key for the OpenAI API. You can get one from the OpenAI Platform.
Configure the plugin to use your API key by doing one of the following:

- Set the OPENAI_API_KEY environment variable to your API key.
- Specify the API key when you initialize the plugin: openAI({ apiKey: yourKey }). However, don't embed your API key directly in code! Use this option only in conjunction with a service like Google Cloud Secret Manager or similar (see the sketch below).
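For example, here is a minimal sketch of the second option, assuming the key is already available in the environment (for instance, injected by a secret manager at deploy time) rather than written into the source:

import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

// Read the key from the environment at startup instead of hard-coding it.
const ai = genkit({
  plugins: [openAI({ apiKey: process.env.OPENAI_API_KEY })],
});

Note that if OPENAI_API_KEY is already set, passing apiKey explicitly is redundant; it is shown here only to illustrate the second option without embedding a literal key.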
The plugin provides helpers to reference supported models and embedders.
Chat Models
You can reference chat models like gpt-4o and gpt-4-turbo using the openAI.model() helper.

import { genkit, z } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

const ai = genkit({
  plugins: [openAI()],
});

export const jokeFlow = ai.defineFlow(
  {
    name: 'jokeFlow',
    inputSchema: z.object({ subject: z.string() }),
    outputSchema: z.object({ joke: z.string() }),
  },
  async ({ subject }) => {
    const llmResponse = await ai.generate({
      prompt: `tell me a joke about ${subject}`,
      model: openAI.model('gpt-4o'),
    });
    return { joke: llmResponse.text };
  },
);
You can also pass model-specific configuration:
const llmResponse = await ai.generate({
  prompt: `tell me a joke about ${subject}`,
  model: openAI.model('gpt-4o'),
  config: {
    temperature: 0.7,
  },
});
Image Generation Models
The plugin supports image generation models like DALL-E 3.
import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

const ai = genkit({
  plugins: [openAI()],
});

// Reference an image generation model
const dalle3 = openAI.model('dall-e-3');

// Use it to generate an image
const imageResponse = await ai.generate({
  model: dalle3,
  prompt: 'A photorealistic image of a cat programming a computer.',
  config: {
    size: '1024x1024',
    style: 'vivid',
  },
});
const imageUrl = imageResponse.media?.url;
Text Embedding Models
You can use text embedding models to create vector embeddings from text.
import { genkit, z } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

const ai = genkit({
  plugins: [openAI()],
});

export const embedFlow = ai.defineFlow(
  {
    name: 'embedFlow',
    inputSchema: z.object({ text: z.string() }),
    outputSchema: z.object({ embedding: z.string() }),
  },
  async ({ text }) => {
    const embedding = await ai.embed({
      embedder: openAI.embedder('text-embedding-ada-002'),
      content: text,
    });

    return { embedding: JSON.stringify(embedding) };
  },
);
Audio Transcription and Speech Models
The OpenAI plugin also supports audio models for transcription (speech-to-text) and speech generation (text-to-speech).
Transcription (Speech-to-Text)
Use models like whisper-1 to transcribe audio files.
import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';
import * as fs from 'fs';

const ai = genkit({
  plugins: [openAI()],
});

const whisper = openAI.model('whisper-1');
const audioFile = fs.readFileSync('path/to/your/audio.mp3');

const transcription = await ai.generate({
  model: whisper,
  prompt: [
    {
      media: {
        contentType: 'audio/mp3',
        url: `data:audio/mp3;base64,${audioFile.toString('base64')}`,
      },
    },
  ],
});
console.log(transcription.text);
Speech Generation (Text-to-Speech)
Use models like tts-1 to generate speech from text.
import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';
import * as fs from 'fs';

const ai = genkit({
  plugins: [openAI()],
});

const tts = openAI.model('tts-1');
const speechResponse = await ai.generate({
  model: tts,
  prompt: 'Hello, world! This is a test of text-to-speech.',
  config: {
    voice: 'alloy',
  },
});
const audioData = speechResponse.media;
if (audioData) {
  fs.writeFileSync('output.mp3', Buffer.from(audioData.url.split(',')[1], 'base64'));
}
Advanced usage
Passthrough configuration

You can pass configuration options that are not defined in the plugin's custom configuration schema. This permits you to access new models and features without having to update your Genkit version.
import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

const ai = genkit({
  plugins: [openAI()],
});

const llmResponse = await ai.generate({
  prompt: `Tell me a cool story`,
  model: openAI.model('gpt-4-new'), // hypothetical new model
  config: {
    seed: 123,
    new_feature_parameter: ... // hypothetical config needed for new model
  },
});
Genkit passes this config as-is to the OpenAI API, giving you access to the new model features. Note that field names and types are not validated by Genkit and must match the OpenAI API specification to work.
Web-search built-in tool
Some OpenAI models support web search. You can enable it in the config block:
import { genkit } from 'genkit';
import { openAI } from '@genkit-ai/compat-oai/openai';

const ai = genkit({
  plugins: [openAI()],
});

const llmResponse = await ai.generate({
  prompt: `What was a positive news story from today?`,
  model: openAI.model('gpt-4o-search-preview'),
  config: {
    web_search_options: {},
  },
});
For Go, the OpenAI plugin provides access to OpenAI models.
Configuration
import "github.com/firebase/genkit/go/plugins/compat_oai/openai"

g := genkit.Init(context.Background(), genkit.WithPlugins(&openai.OpenAI{
    APIKey: "YOUR_OPENAI_API_KEY", // or set OPENAI_API_KEY env var
}))
Supported Models
Latest Models
- gpt-4.1 - Latest GPT-4.1 with multimodal support
- gpt-4.1-mini - Faster, cost-effective GPT-4.1 variant
- gpt-4.1-nano - Ultra-efficient GPT-4.1 variant
- gpt-4.5-preview - Preview of GPT-4.5 with advanced capabilities
Production Models
- gpt-4o - Advanced GPT-4 with vision and tool support
- gpt-4o-mini - Fast and cost-effective GPT-4o variant
- gpt-4-turbo - High-performance GPT-4 with large context window
Reasoning Models
- o3-mini - Latest compact reasoning model
- o1 - Advanced reasoning model for complex problems
- o1-mini - Compact reasoning model
- o1-preview - Preview reasoning model
Legacy Models
- gpt-4 - Original GPT-4 model
- gpt-3.5-turbo - Fast and efficient language model
Embedding Models
- text-embedding-3-large - Most capable embedding model
- text-embedding-3-small - Fast and efficient embedding model
- text-embedding-ada-002 - Legacy embedding model
Usage Example
import (
    "github.com/firebase/genkit/go/plugins/compat_oai/openai"
)

// Initialize Genkit with the OpenAI plugin, keeping a reference to the plugin
// so its Model and Embedder helpers can be used below.
oai := &openai.OpenAI{APIKey: "YOUR_API_KEY"}
g := genkit.Init(ctx, genkit.WithPlugins(oai))

// Use GPT-4o for general tasks
model := oai.Model(g, "gpt-4o")
resp, err := genkit.Generate(ctx, g,
    ai.WithModel(model),
    ai.WithPrompt("Explain quantum computing."),
)

// Use embeddings
embedder := oai.Embedder(g, "text-embedding-3-large")
embeds, err := ai.Embed(ctx, embedder, ai.WithDocs("Hello, world!"))
Advanced Features
Tool Calling

OpenAI models support tool calling:
// Define a tool
weatherTool := genkit.DefineTool(g, "get_weather", "Get current weather",
    func(ctx *ai.ToolContext, input struct{ City string }) (string, error) {
        return fmt.Sprintf("It's sunny in %s", input.City), nil
    })

// Use with GPT models
model := oai.Model(g, "gpt-4o")
resp, err := genkit.Generate(ctx, g,
    ai.WithModel(model),
    ai.WithPrompt("What's the weather like in San Francisco?"),
    ai.WithTools(weatherTool),
)
Multimodal Support
OpenAI models support vision capabilities:
// Works with GPT-4o models
resp, err := genkit.Generate(ctx, g,
    ai.WithModel(model),
    ai.WithMessages(
        ai.NewUserMessage(
            ai.NewTextPart("What do you see in this image?"),
            ai.NewMediaPart("image/jpeg", imageData),
        ),
    ),
)
Streaming
OpenAI models support streaming responses:
resp, err := genkit.Generate(ctx, g,
    ai.WithModel(model),
    ai.WithPrompt("Write a long explanation."),
    ai.WithStreaming(func(ctx context.Context, chunk *ai.ModelResponseChunk) error {
        for _, content := range chunk.Content {
            fmt.Print(content.Text)
        }
        return nil
    }),
)