Documentation
Overview ¶
Package googleexecutor provides a generic Google AI (Gemini) executor for AI agents.
This package implements a reusable pattern for Google AI-based agents, handling:
- Prompt template rendering
- Chat session management
- Tool/function calling
- Response parsing and extraction
- Trace management for evaluation
Architecture ¶
The executor follows a generic design pattern where Request and Response types are parameterized, allowing different agents to reuse the same core logic:
type MyRequest struct {
	Input string
}

type MyResponse struct {
	Output string
}

executor, err := googleexecutor.New[*MyRequest, *MyResponse](
	client,
	promptTemplate,
	googleexecutor.WithModel[*MyRequest, *MyResponse]("gemini-2.5-flash"),
)
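The constructor above is Go's functional-options pattern, parameterized over the Request and Response types. A minimal stdlib-only sketch of that pattern follows; the `executor` struct, its fields, and the default value here are hypothetical stand-ins, not the package's actual internals:

```go
package main

import "fmt"

// executor is a hypothetical stand-in for the package's internal executor type.
type executor[Request, Response any] struct {
	model string
}

// Option mirrors the functional-option shape used by googleexecutor:
// a function that mutates the executor being built, or fails.
type Option[Request, Response any] func(*executor[Request, Response]) error

// WithModel returns an option that sets the model name.
func WithModel[Request, Response any](model string) Option[Request, Response] {
	return func(e *executor[Request, Response]) error {
		e.model = model
		return nil
	}
}

// New applies each option in order, failing fast on the first error.
func New[Request, Response any](opts ...Option[Request, Response]) (*executor[Request, Response], error) {
	e := &executor[Request, Response]{model: "default-model"}
	for _, opt := range opts {
		if err := opt(e); err != nil {
			return nil, err
		}
	}
	return e, nil
}

func main() {
	e, err := New[string, string](WithModel[string, string]("gemini-2.5-flash"))
	if err != nil {
		panic(err)
	}
	fmt.Println(e.model) // prints "gemini-2.5-flash"
}
```

Because each option repeats the type parameters, call sites must spell them out on every `WithX` call, as the real examples below do.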
Tool Support ¶
The executor supports Google AI function calling through the Metadata type:
tools := map[string]googletool.Metadata[*MyResponse]{
	"my_tool": {
		Definition: &genai.FunctionDeclaration{
			Name:        "my_tool",
			Description: "Tool description",
			Parameters:  &genai.Schema{...},
		},
		Handler: func(ctx context.Context, call *genai.FunctionCall, trace *agenttrace.Trace[*MyResponse]) *genai.FunctionResponse {
			// Tool implementation
		},
	},
}

response, err := executor.Execute(ctx, request, tools)
Options ¶
The executor supports various configuration options:
- WithModel: Set the Gemini model to use
- WithTemperature: Control response randomness (0.0-2.0)
- WithMaxOutputTokens: Set maximum response length
- WithSystemInstructions: Provide system-level instructions
- WithResponseMIMEType: Set response format (e.g., "application/json")
- WithResponseSchema: Define structured output schema
- WithThinking: Enable thinking mode with a token budget
Thinking Mode ¶
Thinking mode allows Gemini to show its internal reasoning process. When enabled, thought blocks are captured in the trace:
executor, err := googleexecutor.New[*Request, *Response](
	client,
	prompt,
	googleexecutor.WithThinking[*Request, *Response](2048), // 2048 token budget for thinking
)
Reasoning blocks are stored in trace.Reasoning as []agenttrace.ReasoningContent, where each block contains:
- Thinking: the reasoning text
Integration with Evaluation ¶
The executor automatically integrates with the evals package for tracing:
- Creates traces for each execution
- Records tool calls and responses
- Tracks bad tool calls for debugging
- Provides complete execution history
Error Handling ¶
The executor provides comprehensive error handling:
- Template rendering errors
- Chat creation failures
- Malformed function calls (with automatic retry)
- Response parsing errors
- Tool execution errors
Usage Example ¶
// Create client
client, err := genai.NewClient(ctx, &genai.ClientConfig{
	Project:  projectID,
	Location: region,
	Backend:  genai.BackendVertexAI,
})

// Parse template
tmpl := template.Must(template.New("prompt").Parse("Analyze: {{.Input}}"))

// Create executor
executor, err := googleexecutor.New[*Request, *Response](
	client,
	tmpl,
	googleexecutor.WithModel[*Request, *Response]("gemini-2.5-flash"),
	googleexecutor.WithTemperature[*Request, *Response](0.1),
	googleexecutor.WithResponseMIMEType[*Request, *Response]("application/json"),
)

// Execute
response, err := executor.Execute(ctx, request, nil)
Performance Considerations ¶
- Templates are executed for each request (consider pre-rendering if static)
- Chat sessions are created per execution (not reused)
- Tool responses are sent synchronously
- Large response schemas may impact latency
Thread Safety ¶
The executor is safe for concurrent use. Each Execute call creates its own chat session and maintains independent state.
Example ¶
Example demonstrates basic usage of the Google AI executor
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"chainguard.dev/driftlessaf/agents/executor/googleexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
	"google.golang.org/genai"
)

// MathRequest is a sample request type for math problems
type MathRequest struct {
	Problem string
}

// Bind implements promptbuilder.Bindable
func (r *MathRequest) Bind(p *promptbuilder.Prompt) (*promptbuilder.Prompt, error) {
	return p.BindXML("problem", struct {
		XMLName struct{} `xml:"problem"`
		Content string   `xml:",chardata"`
	}{
		Content: r.Problem,
	})
}

// MathResponse is a sample response type for math solutions
type MathResponse struct {
	Answer    json.Number `json:"answer"`
	Reasoning string      `json:"reasoning"`
}

func main() {
	ctx := context.Background()

	// Create Gemini client
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  "my-project",
		Location: "us-central1",
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	// Create prompt template
	prompt, err := promptbuilder.NewPrompt(`You are a math assistant.
Problem: {{problem}}
Solve this and respond in JSON format:
{
"answer": "the numerical answer",
"reasoning": "brief explanation"
}`)
	if err != nil {
		log.Fatalf("Failed to create prompt: %v", err)
	}

	// Create executor with default settings
	executor, err := googleexecutor.New[*MathRequest, *MathResponse](
		client,
		prompt,
	)
	if err != nil {
		log.Fatalf("Failed to create executor: %v", err)
	}

	// Execute a request
	request := &MathRequest{Problem: "What is 15 + 27?"}
	response, err := executor.Execute(ctx, request, nil)
	if err != nil {
		log.Fatalf("Execute failed: %v", err)
	}

	fmt.Printf("Answer: %s\n", response.Answer)
}
Output:
Example (WithOptions) ¶
Example_withOptions demonstrates using configuration options
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"chainguard.dev/driftlessaf/agents/executor/googleexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
	"google.golang.org/genai"
)

// MathRequest is a sample request type for math problems
type MathRequest struct {
	Problem string
}

// Bind implements promptbuilder.Bindable
func (r *MathRequest) Bind(p *promptbuilder.Prompt) (*promptbuilder.Prompt, error) {
	return p.BindXML("problem", struct {
		XMLName struct{} `xml:"problem"`
		Content string   `xml:",chardata"`
	}{
		Content: r.Problem,
	})
}

// MathResponse is a sample response type for math solutions
type MathResponse struct {
	Answer    json.Number `json:"answer"`
	Reasoning string      `json:"reasoning"`
}

func main() {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  "my-project",
		Location: "us-central1",
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	prompt, err := promptbuilder.NewPrompt(`Solve: {{problem}}`)
	if err != nil {
		log.Fatalf("Failed to create prompt: %v", err)
	}

	// Create executor with custom options
	executor, err := googleexecutor.New[*MathRequest, *MathResponse](
		client,
		prompt,
		googleexecutor.WithModel[*MathRequest, *MathResponse]("gemini-2.5-flash"),
		googleexecutor.WithTemperature[*MathRequest, *MathResponse](0.1),
		googleexecutor.WithMaxOutputTokens[*MathRequest, *MathResponse](4096),
		googleexecutor.WithResponseMIMEType[*MathRequest, *MathResponse]("application/json"),
	)
	if err != nil {
		log.Fatalf("Failed to create executor: %v", err)
	}

	request := &MathRequest{Problem: "What is 42 * 13?"}
	response, err := executor.Execute(ctx, request, nil)
	if err != nil {
		log.Fatalf("Execute failed: %v", err)
	}

	fmt.Printf("Answer: %s\n", response.Answer)
}
Output:
Example (WithSystemInstructions) ¶
Example_withSystemInstructions demonstrates using system instructions
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"chainguard.dev/driftlessaf/agents/executor/googleexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
	"google.golang.org/genai"
)

// MathRequest is a sample request type for math problems
type MathRequest struct {
	Problem string
}

// Bind implements promptbuilder.Bindable
func (r *MathRequest) Bind(p *promptbuilder.Prompt) (*promptbuilder.Prompt, error) {
	return p.BindXML("problem", struct {
		XMLName struct{} `xml:"problem"`
		Content string   `xml:",chardata"`
	}{
		Content: r.Problem,
	})
}

// MathResponse is a sample response type for math solutions
type MathResponse struct {
	Answer    json.Number `json:"answer"`
	Reasoning string      `json:"reasoning"`
}

func main() {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  "my-project",
		Location: "us-central1",
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	// Create system instructions
	systemPrompt, err := promptbuilder.NewPrompt(`You are an expert mathematician.
Always show your work step by step.
Provide clear, concise explanations.`)
	if err != nil {
		log.Fatalf("Failed to create system prompt: %v", err)
	}

	prompt, err := promptbuilder.NewPrompt(`Problem: {{problem}}`)
	if err != nil {
		log.Fatalf("Failed to create prompt: %v", err)
	}

	// Create executor with system instructions
	executor, err := googleexecutor.New[*MathRequest, *MathResponse](
		client,
		prompt,
		googleexecutor.WithSystemInstructions[*MathRequest, *MathResponse](systemPrompt),
		googleexecutor.WithResponseMIMEType[*MathRequest, *MathResponse]("application/json"),
	)
	if err != nil {
		log.Fatalf("Failed to create executor: %v", err)
	}

	request := &MathRequest{Problem: "What is 25% of 80?"}
	response, err := executor.Execute(ctx, request, nil)
	if err != nil {
		log.Fatalf("Execute failed: %v", err)
	}

	fmt.Printf("Answer: %s\n", response.Answer)
}
Output:
Example (WithThinking) ¶
Example_withThinking demonstrates enabling thinking mode
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"chainguard.dev/driftlessaf/agents/executor/googleexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
	"google.golang.org/genai"
)

// MathRequest is a sample request type for math problems
type MathRequest struct {
	Problem string
}

// Bind implements promptbuilder.Bindable
func (r *MathRequest) Bind(p *promptbuilder.Prompt) (*promptbuilder.Prompt, error) {
	return p.BindXML("problem", struct {
		XMLName struct{} `xml:"problem"`
		Content string   `xml:",chardata"`
	}{
		Content: r.Problem,
	})
}

// MathResponse is a sample response type for math solutions
type MathResponse struct {
	Answer    json.Number `json:"answer"`
	Reasoning string      `json:"reasoning"`
}

func main() {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  "my-project",
		Location: "us-central1",
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	prompt, err := promptbuilder.NewPrompt(`Solve this complex problem: {{problem}}`)
	if err != nil {
		log.Fatalf("Failed to create prompt: %v", err)
	}

	// Enable thinking mode with a 2048 token budget
	executor, err := googleexecutor.New[*MathRequest, *MathResponse](
		client,
		prompt,
		googleexecutor.WithModel[*MathRequest, *MathResponse]("gemini-2.5-flash"),
		googleexecutor.WithMaxOutputTokens[*MathRequest, *MathResponse](8192),
		googleexecutor.WithThinking[*MathRequest, *MathResponse](2048),
		googleexecutor.WithResponseMIMEType[*MathRequest, *MathResponse]("application/json"),
	)
	if err != nil {
		log.Fatalf("Failed to create executor: %v", err)
	}

	request := &MathRequest{Problem: "What is the square root of 144?"}
	response, err := executor.Execute(ctx, request, nil)
	if err != nil {
		log.Fatalf("Execute failed: %v", err)
	}

	fmt.Printf("Answer: %s\n", response.Answer)
}
Output:
Index ¶
- Constants
- type Interface
- type Option
- func WithMaxOutputTokens[Request promptbuilder.Bindable, Response any](tokens int32) Option[Request, Response]
- func WithMaxTurns[Request promptbuilder.Bindable, Response any](turns int) Option[Request, Response]
- func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]
- func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]
- func WithResponseMIMEType[Request promptbuilder.Bindable, Response any](mimeType string) Option[Request, Response]
- func WithResponseSchema[Request promptbuilder.Bindable, Response any](schema *genai.Schema) Option[Request, Response]
- func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]
- func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]
- func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]
- func WithTemperature[Request promptbuilder.Bindable, Response any](temperature float32) Option[Request, Response]
- func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int32) Option[Request, Response]
- type SubmitResultProvider
Examples ¶
Constants ¶
const DefaultMaxTurns = 50
DefaultMaxTurns is the default maximum number of conversation turns (LLM round-trips) before the executor aborts. Each turn corresponds to one Gemini API call. This prevents runaway loops when the model keeps calling tools without converging on a result.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Interface ¶
type Interface[Request promptbuilder.Bindable, Response any] interface {
	// Execute runs the Google AI conversation with the given request and tools.
	// Optional seed tool calls can be provided - these will be executed and
	// their results prepended to the conversation.
	Execute(ctx context.Context, request Request, tools map[string]googletool.Metadata[Response], seedToolCalls ...*genai.FunctionCall) (Response, error)
}
Interface defines the contract for Google AI executors
type Option ¶
type Option[Request promptbuilder.Bindable, Response any] func(*executor[Request, Response]) error
Option is a functional option for configuring an executor
func WithMaxOutputTokens ¶
func WithMaxOutputTokens[Request promptbuilder.Bindable, Response any](tokens int32) Option[Request, Response]
WithMaxOutputTokens sets the maximum output tokens for generation
func WithMaxTurns ¶ added in v0.2.0
func WithMaxTurns[Request promptbuilder.Bindable, Response any](turns int) Option[Request, Response]
WithMaxTurns sets the maximum number of conversation turns (LLM round-trips) before the executor aborts. This prevents runaway loops where the model keeps calling tools without converging on a result. Default is DefaultMaxTurns (50).
func WithModel ¶
func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]
WithModel sets the model to use for generation
func WithResourceLabels ¶
func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]
WithResourceLabels sets labels that are sent with each Vertex AI API request. Automatically includes default labels from environment variables:
- service_name: from K_SERVICE (defaults to "unknown")
- product: from CHAINGUARD_PRODUCT (defaults to "unknown")
- team: from CHAINGUARD_TEAM (defaults to "unknown")
Custom labels passed to this function will override defaults if they use the same keys.
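The described precedence (environment-derived defaults, custom keys winning) can be sketched with the stdlib. The helper names below are hypothetical; only the merge behavior mirrors what the documentation states.

```go
package main

import (
	"fmt"
	"os"
)

// envOr returns the named environment variable's value, or def when unset.
func envOr(name, def string) string {
	if v, ok := os.LookupEnv(name); ok {
		return v
	}
	return def
}

// mergeLabels sketches the documented precedence: defaults come from the
// environment (falling back to "unknown"), and custom labels with the same
// keys override them.
func mergeLabels(custom map[string]string) map[string]string {
	labels := map[string]string{
		"service_name": envOr("K_SERVICE", "unknown"),
		"product":      envOr("CHAINGUARD_PRODUCT", "unknown"),
		"team":         envOr("CHAINGUARD_TEAM", "unknown"),
	}
	for k, v := range custom {
		labels[k] = v // custom keys win over defaults
	}
	return labels
}

func main() {
	labels := mergeLabels(map[string]string{"team": "platform", "env": "prod"})
	fmt.Println(labels["team"], labels["env"]) // "team" overridden, "env" added
}
```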
func WithResponseMIMEType ¶
func WithResponseMIMEType[Request promptbuilder.Bindable, Response any](mimeType string) Option[Request, Response]
WithResponseMIMEType sets the response MIME type (e.g., "application/json")
func WithResponseSchema ¶
func WithResponseSchema[Request promptbuilder.Bindable, Response any](schema *genai.Schema) Option[Request, Response]
WithResponseSchema sets the response schema for structured output
func WithRetryConfig ¶
func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]
WithRetryConfig sets the retry configuration for handling transient Vertex AI errors. This is particularly useful for handling 429 RESOURCE_EXHAUSTED errors that occur when quota limits are hit. If not set, a default configuration is used.
func WithSubmitResultProvider ¶
func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]
WithSubmitResultProvider registers the submit_result tool using the supplied provider. This is opt-in - agents must explicitly call this to enable submit_result.
func WithSystemInstructions ¶
func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]
WithSystemInstructions sets the system instructions for the model
func WithTemperature ¶
func WithTemperature[Request promptbuilder.Bindable, Response any](temperature float32) Option[Request, Response]
WithTemperature sets the temperature for generation. Gemini models support temperature values from 0.0 to 2.0, a wider range than Claude (0.0-1.0), allowing for more creative outputs. Lower values (e.g., 0.1) produce more deterministic outputs; higher values (e.g., 1.5-2.0) produce very creative/random outputs.
func WithThinking ¶
func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int32) Option[Request, Response]
WithThinking enables thinking mode with the specified token budget. The budget parameter sets the maximum tokens the model can use for reasoning; the special value -1 enables dynamic thinking, where the model adjusts its budget based on problem complexity. The budget must be less than max_output_tokens to leave room for the actual output. See https://ai.google.dev/gemini-api/docs/thinking
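The budget constraints stated here can be sketched as a small validation helper. This illustrates the documented rules only; it is not the package's actual validation code.

```go
package main

import (
	"errors"
	"fmt"
)

// validateThinkingBudget sketches the documented constraints: -1 means
// dynamic thinking; otherwise the budget must be positive and strictly
// below maxOutputTokens so some tokens remain for the actual output.
func validateThinkingBudget(budget, maxOutputTokens int32) error {
	if budget == -1 {
		return nil // dynamic thinking: model picks its own budget
	}
	if budget <= 0 {
		return errors.New("thinking budget must be positive or -1")
	}
	if budget >= maxOutputTokens {
		return errors.New("thinking budget must be less than max output tokens")
	}
	return nil
}

func main() {
	fmt.Println(validateThinkingBudget(2048, 8192)) // prints "<nil>"
	fmt.Println(validateThinkingBudget(-1, 8192))   // prints "<nil>"
	fmt.Println(validateThinkingBudget(8192, 8192)) // prints the range error
}
```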
type SubmitResultProvider ¶
type SubmitResultProvider[Response any] func() (googletool.Metadata[Response], error)
SubmitResultProvider constructs tool metadata for submit_result.
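A stdlib-only sketch of how such a provider might be consumed: the `metadata` and `register` names below are hypothetical stand-ins for `googletool.Metadata` and the executor's registration step. The provider shape (a nullary constructor that can fail) lets tool-construction errors surface at configuration time rather than mid-conversation.

```go
package main

import "fmt"

// metadata is a hypothetical stand-in for googletool.Metadata.
type metadata[Response any] struct {
	name string
}

// provider mirrors the SubmitResultProvider shape: a nullary constructor
// that can fail.
type provider[Response any] func() (metadata[Response], error)

// register sketches how an option like WithSubmitResultProvider could
// consume the provider: build the tool once, then store it under its name.
func register[Response any](tools map[string]metadata[Response], p provider[Response]) error {
	md, err := p()
	if err != nil {
		return err
	}
	tools[md.name] = md
	return nil
}

func main() {
	tools := map[string]metadata[string]{}
	p := provider[string](func() (metadata[string], error) {
		return metadata[string]{name: "submit_result"}, nil
	})
	if err := register(tools, p); err != nil {
		panic(err)
	}
	fmt.Println(len(tools)) // prints "1"
}
```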