Required Environment Variables
At minimum, you need an API key for at least one AI provider.
AI Provider Configuration
Google Gemini
Google AI Studio API key for Gemini models.
How to get:
- Visit Google AI Studio
- Click “Create API Key”
- Copy the generated key
Supported models:
- gemini-2.0-flash (default, recommended)
- gemini-1.5-flash
- gemini-1.0-pro
Google Gemini offers generous free tier limits and excellent performance for documentation generation.
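A minimal .env sketch follows; the variable name GOOGLE_API_KEY is an assumption, so confirm it against your project’s .env.example.
```bash
# Google Gemini (variable name assumed; value is a placeholder)
GOOGLE_API_KEY=your-google-ai-studio-key
```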
OpenAI
OpenAI API key for GPT models. A custom OpenAI API endpoint can also be set for OpenAI-compatible services.
How to get:
- Visit OpenAI Platform
- Create a new secret key
- Copy the key (starts with sk-)
Supported models:
- gpt-4o (default)
- gpt-4.1
- o1
- o3
- o4-mini
OpenAI requires a paid account. Free tier users cannot access the API.
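A minimal .env sketch, assuming the variable names OPENAI_API_KEY and OPENAI_BASE_URL (check your .env.example); the endpoint override is only needed for OpenAI-compatible services.
```bash
# OpenAI (variable names assumed; values are placeholders)
OPENAI_API_KEY=sk-your-openai-key
# Optional: point at an OpenAI-compatible endpoint instead of api.openai.com
OPENAI_BASE_URL=https://your-openai-compatible-host/v1
```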
OpenRouter
OpenRouter API key for access to multiple model providers.
How to get:
- Sign up at OpenRouter
- Go to the Keys section
- Create a new API key
Supported models:
- openai/gpt-4o
- anthropic/claude-3.5-sonnet
- deepseek/deepseek-r1
- google/gemini-pro
- and 100+ more models
OpenRouter provides access to multiple AI providers through a single API, perfect for comparing models.
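A minimal .env sketch; the variable name OPENROUTER_API_KEY is an assumption.
```bash
# OpenRouter (variable name assumed; value is a placeholder)
OPENROUTER_API_KEY=your-openrouter-key
```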
Azure OpenAI
Azure OpenAI service API key, your Azure OpenAI resource endpoint URL, and the API version (e.g., 2024-02-15-preview).
How to get:
- Create Azure OpenAI resource in Azure Portal
- Deploy a model (GPT-4, GPT-3.5-turbo, etc.)
- Get endpoint and API key from resource overview
- Note the API version from the deployment
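As an illustration, the three values above might appear in .env like this; the variable names are assumptions, so check your .env.example.
```bash
# Azure OpenAI (variable names assumed; values are placeholders)
AZURE_OPENAI_API_KEY=your-azure-openai-key
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_VERSION=2024-02-15-preview
```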
AWS Bedrock
AWS access key for Bedrock access, the AWS secret access key, and the AWS region where Bedrock models are available.
Supported models:
- anthropic.claude-3-sonnet-20240229-v1:0
- anthropic.claude-3-haiku-20240307-v1:0
- anthropic.claude-3-opus-20240229-v1:0
- amazon.titan-text-express-v1
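A minimal .env sketch, assuming the standard AWS credential variable names.
```bash
# AWS Bedrock (variable names assumed; values are placeholders)
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
```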
Ollama (Local Models)
Ollama server URL for local AI models.
Set up Ollama and pull a model; a setup sketch follows the list. Recommended models:
- qwen3:1.7b (lightweight)
- llama3:8b (balanced)
- qwen3:8b (high context)
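A rough setup sketch; the OLLAMA_HOST variable name and the port 11434 (Ollama’s default) are assumptions based on a standard local install.
```bash
# Pull one of the recommended models after installing Ollama
ollama pull qwen3:1.7b

# Then point DeepWiki at the local Ollama server in .env (variable name assumed)
OLLAMA_HOST=http://localhost:11434
```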
DashScope (Alibaba)
Alibaba DashScope API key for Qwen models.
How to get:
- Sign up at DashScope
- Create an API key in the console
- Add the key to your environment
Supported models:
- qwen-plus
- qwen-turbo
- deepseek-r1
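A minimal .env sketch; the variable name DASHSCOPE_API_KEY is an assumption.
```bash
# Alibaba DashScope (variable name assumed; value is a placeholder)
DASHSCOPE_API_KEY=your-dashscope-key
```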
Server Configuration
Port for the FastAPI backend server. Base URL for the API server (used by the frontend). Environment mode (development, production, or test).
Example server configuration:
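A sketch of such a configuration; the variable names PORT, SERVER_BASE_URL, and NODE_ENV and the port value are assumptions, so adjust them to your deployment.
```bash
# Server configuration (variable names and values are assumptions)
PORT=8001                              # FastAPI backend port
SERVER_BASE_URL=http://localhost:8001  # API base URL used by the frontend
NODE_ENV=development                   # development, production, or test
```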
Security & Authentication
Authorization Mode
Enable an authorization requirement for wiki generation, and set the secret code required when authorization mode is enabled (a usage sketch follows).
Authorization mode provides basic frontend protection but doesn’t secure direct API access.
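Assuming the variable names DEEPWIKI_AUTH_MODE and DEEPWIKI_AUTH_CODE (check your .env.example), usage might look like this:
```bash
# Authorization mode (variable names assumed; code value is a placeholder)
DEEPWIKI_AUTH_MODE=true
DEEPWIKI_AUTH_CODE=your-secret-code
```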
Logging & Debugging
Logging verbosity level. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL. Path for log file output.
Example logging configuration:
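A sketch of such a configuration; the variable names LOG_LEVEL and LOG_FILE_PATH and the path are assumptions.
```bash
# Logging (variable names assumed; adjust the path to your setup)
LOG_LEVEL=INFO
LOG_FILE_PATH=./logs/application.log
```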
In production, use the INFO or WARNING level to reduce log volume.
Advanced Configuration
Directory containing configuration JSON files. Redis connection URL for caching (optional).
Example:
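A sketch, assuming the variable names DEEPWIKI_CONFIG_DIR and REDIS_URL; the values are placeholders.
```bash
# Advanced configuration (variable names assumed; values are placeholders)
DEEPWIKI_CONFIG_DIR=./api/config
REDIS_URL=redis://localhost:6379/0
```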
Environment File Templates
Templates are provided for development, production, and Docker deployments.
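As an illustrative sketch only, a development-style .env can combine the entries above (using the same assumed variable names); for production you would swap in production values and enable authorization, and with Docker the same file is typically passed at run time.
```bash
# Development-style .env sketch (assumed variable names; replace placeholder values)
GOOGLE_API_KEY=your-google-ai-studio-key
PORT=8001
SERVER_BASE_URL=http://localhost:8001
NODE_ENV=development
LOG_LEVEL=DEBUG

# With Docker, pass the file when starting the container (image name and ports are placeholders):
# docker run --env-file .env -p 8001:8001 -p 3000:3000 deepwiki
```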
Validation & Testing
1. Validate Environment
2. Test API Connections
3. Verify Frontend Connection
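A rough way to run these checks from the shell; the grep pattern, port, and URL are assumptions, so substitute whatever your deployment actually uses.
```bash
# 1. Validate environment: confirm the expected keys are present in .env
grep -E '^(GOOGLE|OPENAI|OPENROUTER)_API_KEY=' .env

# 2. Test API connections: start the backend and confirm it responds (port assumed from the server example)
curl -s http://localhost:8001 >/dev/null && echo "backend reachable"

# 3. Verify frontend connection: open the frontend in a browser and generate a small test wiki
```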
Security Best Practices
API Key Security
- Never commit .env files to version control
- Use different API keys for development and production
- Regularly rotate API keys
- Monitor API usage for unexpected activity
- Use environment-specific keys when possible
Production Security
Network Security
- Use HTTPS in production
- Configure proper CORS settings
- Use private networks for internal components
- Enable authorization mode for public deployments
Troubleshooting
Environment Variables Not Loading
Symptoms: API key errors, default values used
Solutions:
- Verify the .env file is in the project root
- Check file permissions (readable by the application)
- Ensure there are no syntax errors in the .env file
- Restart the application after changes
API Key Validation Errors
Symptoms: “Invalid API key” errors
Solutions:
- Test API keys following the provider’s documentation
- Check for extra spaces or characters
- Verify key has correct permissions/scopes
- Confirm key hasn’t expired or been revoked
Port Conflicts
Symptoms: “Port already in use” errors
Solutions:
- Change PORT environment variable
- Kill existing processes on the port (see the example after this list)
- Use Docker with port mapping
- Configure reverse proxy
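For example, to find and stop a conflicting process (the port shown is the assumed backend default from the server example):
```bash
# Find the process holding the port, then stop it or pick another port
lsof -i :8001
kill <PID>   # replace <PID> with the process id reported by lsof
```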
Next Steps
- Model Provider Setup: configure specific AI model providers and their settings
- Production Deployment: deploy DeepWiki with production-ready configuration
- Configuration Files: learn about JSON configuration files for advanced customization
- Security Guide: implement security best practices for production deployments