# Choosing a Model
How to select the best AI model for your Chipp app
Chipp supports multiple AI models from OpenAI, Anthropic, and Google. Each has different strengths, speeds, and costs. This guide helps you choose the right one.
## Quick Recommendations
| Use Case | Recommended Model |
|----------|-------------------|
| General purpose | GPT-5 or Claude Sonnet 4 |
| Image analysis | Gemini 2.5 Pro, GPT-5, or Claude Sonnet 4 |
| Complex reasoning | o3, Claude Opus 4, or Claude Sonnet 4.5 |
| Fast responses | Gemini 2.5 Flash, GPT-5 Mini, or Claude 3.5 Haiku |
| Long documents | Gemini 2.5 Pro or GPT-4.1 (1M token context) |
| Cost-sensitive | Gemini 2.5 Flash-Lite, Gemini 2.5 Flash, or GPT-5 Nano |
## Estimate Your Costs
Use our calculator to estimate monthly AI costs based on your expected usage.
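If you prefer to run the numbers yourself, the arithmetic behind such an estimate is simple: tokens per month divided by one million, times the per-million-token price. A minimal sketch, using illustrative per-million-token prices (blended input/output) that are assumptions here; check current pricing before relying on them:

```python
# Rough monthly-cost sketch for comparing models.
# Prices are illustrative assumptions (USD per 1M tokens, blended
# input/output averages) -- verify against current pricing.
PRICE_PER_M_TOKENS = {
    "gemini-2.5-flash": 0.25,
    "gpt-5-nano": 0.08,
}

def monthly_cost(model: str, conversations_per_month: int,
                 avg_tokens_per_conversation: int) -> float:
    """Estimate monthly spend for one model."""
    total_tokens = conversations_per_month * avg_tokens_per_conversation
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS[model]

# e.g. 10,000 conversations/month at ~2,000 tokens each = 20M tokens
for model in PRICE_PER_M_TOKENS:
    print(f"{model}: ${monthly_cost(model, 10_000, 2_000):.2f}/month")
```

The same shape works for any model once you plug in its real prices.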
## Featured Models
### Best for General Purpose
These models excel at a wide range of tasks including writing, coding, analysis, and conversation.
### Best for Speed
When response time matters most, these models deliver near-instant results without sacrificing quality.
### Best for Reasoning
For complex problem-solving, multi-step analysis, and tasks requiring deep thinking.
### Best Value for Long Documents
Process entire codebases, legal contracts, or book-length content with massive context windows.
## Model Deep Dives
### OpenAI GPT-5
### Claude Sonnet 4
### Gemini 2.5 Pro
### OpenAI o3
### Claude 3.5 Haiku
### Gemini 2.5 Flash
## Cost-Effective Options
If you're optimizing for cost, these models offer excellent value.
## Key Considerations
### Vision Support
If your app analyzes images, choose a model with native vision support. Models without vision use a fallback that may be less accurate.
**Vision-capable models:**
- All GPT-4.1 and GPT-5 variants (not o-series reasoning models)
- All Claude models
- All Gemini models
**No vision support:**
- OpenAI o-series (o1, o3, o4-mini, etc.)
### Response Speed
Speed matters for user experience. Faster models keep conversations flowing naturally.
**Fastest:** GPT-5 Nano, Claude 3.5 Haiku, Gemini 2.5 Flash-Lite
**Medium:** GPT-5, Claude Sonnet 4, Gemini 2.5 Flash
**Slower:** Claude Opus 4, o1, o3 (reasoning takes time)
### Context Window
For processing long documents, choose models with large context windows:
**1M tokens:** GPT-4.1 variants, all Gemini models
**400k tokens:** GPT-5 variants
**200k tokens:** All Claude models, OpenAI o-series
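A quick way to sanity-check whether a document will fit is to estimate its token count from its length. The ~4-characters-per-token ratio below is a coarse heuristic for English text (an assumption, not an exact rule); use a real tokenizer for precise counts:

```python
# Rough check of whether a document fits a model's context window.
# The ~4 characters-per-token ratio is a coarse English-text heuristic.
CONTEXT_WINDOW = {            # tokens, per the tiers above
    "gpt-4.1": 1_000_000,
    "gpt-5": 400_000,
    "claude-sonnet-4": 200_000,
}

def fits_in_context(model: str, text: str,
                    reserve_for_output: int = 8_000) -> bool:
    """True if the text (plus room for the reply) likely fits."""
    est_tokens = len(text) // 4          # heuristic estimate
    return est_tokens + reserve_for_output <= CONTEXT_WINDOW[model]
```

Reserving some budget for the model's reply matters: a document that exactly fills the window leaves no room for output.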
### Reasoning Quality
For complex tasks requiring multi-step reasoning:
**Best reasoning:** o3 Pro, o1 Pro, Claude Opus 4
**Very good:** GPT-5, Claude Sonnet 4.5, o3, o1
**Good:** GPT-5 Mini, Claude Sonnet 4, Gemini 2.5 Pro
## Changing Your Model
1. Go to your app in the Chipp dashboard
2. Navigate to **Build** > **Configure**
3. Under **Model**, select your preferred model
4. Click **Save**
Changes take effect immediately for new conversations.
## Testing Different Models
Not sure which model works best? Try these approaches:
1. **A/B testing**: Create two versions of your app with different models and compare user feedback
2. **Specific prompts**: Test your most common use cases with different models to see quality differences
3. **Speed vs. quality**: Start with a fast model, then upgrade if users need better responses
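For the A/B approach, the key property is that a returning user keeps seeing the same model for the duration of the test. One common way to get that is a stable hash of the user ID; a minimal sketch (the model names are just the two variants under test, not a Chipp API):

```python
import hashlib

# Deterministically assign each user to one of two model variants,
# so a returning user always lands in the same test group.
VARIANTS = ["gpt-5", "claude-sonnet-4"]

def assign_variant(user_id: str) -> str:
    """Stable 50/50 split keyed on the user ID."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return VARIANTS[digest[0] % len(VARIANTS)]
```

Because the assignment is derived from the ID rather than stored, it needs no database and is trivially reproducible when you analyze the feedback.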
## Using with MCP
You can manage AI models programmatically using the Chipp MCP Server, including AI-powered model recommendations.
### Available Tools
| Tool | Description |
|------|-------------|
| `get_model` | Get current model for an app |
| `update_model` | Change the AI model for an app |
| `list_available_models` | List all available AI models |
| `recommend_model` | Get AI-powered model recommendations |
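Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. A minimal sketch of the payload a client would send to invoke `recommend_model` — note that the `priority` argument mirrors the workflow example below and is an assumption about this tool's schema, not a documented contract:

```python
import json

# Minimal JSON-RPC 2.0 payload for an MCP tools/call request.
# The "priority" argument is an assumed parameter of recommend_model.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recommend_model",
        "arguments": {"priority": "speed"},
    },
}
print(json.dumps(request, indent=2))
```

In practice an MCP client library builds and sends this for you; the sketch just shows what travels over the wire.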
### Example Prompts
```
"What model is my FAQ bot using?"
"Switch my support bot to GPT-5"
"Which model would you recommend for my customer service app?"
"List all available models with their pricing"
```
### Workflow: AI-Powered Model Selection
```
You: Recommend a model for my customer support bot that needs to be fast
and handle simple FAQ questions
AI: [Calls recommend_model with priority="speed"]
Based on your use case, I recommend:
1. **Gemini 2.5 Flash** (Best choice)
- Very fast responses (~1.5s)
- Cost: $0.25/M tokens average
- Great for FAQ and simple queries
2. **GPT-5 Nano** (Budget option)
- Fastest response time
- Cost: $0.08/M tokens average
- Good for high-volume, simple tasks
Want me to update your app to use Gemini 2.5 Flash?
You: Yes, update it
AI: [Calls update_model]
Updated FAQ Bot to use Gemini 2.5 Flash. Changes take effect
for new conversations.
```
MCP server access requires a Pro plan or higher. See the [MCP Setup Guide](/docs/guides/mcp/setup) to get started.