DeepWiki-Open uses environment variables to configure AI providers, server settings, authentication, and advanced features. This guide covers every available environment variable and how to use it.
## Required Environment Variables
At minimum, you need API keys for at least one AI provider.

## AI Provider Configuration
### Google Gemini
Google AI Studio API key for Gemini models.

How to get:
1. Visit Google AI Studio
2. Click "Create API Key"
3. Copy the generated key

Supported models:
- `gemini-2.0-flash` (default, recommended)
- `gemini-1.5-flash`
- `gemini-1.0-pro`
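As a sketch, the key is supplied through the environment; the variable name `GOOGLE_API_KEY` is an assumption to verify against your DeepWiki-Open version:

```shell
# .env entry for Google Gemini (variable name assumed)
GOOGLE_API_KEY=AIza...your-key-here
```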
### OpenAI
OpenAI API key for GPT models. A custom OpenAI API endpoint can also be set for OpenAI-compatible services.

How to get:
1. Visit OpenAI Platform
2. Create a new secret key
3. Copy the key (starts with `sk-`)

Supported models:
- `gpt-4o` (default)
- `gpt-4.1`
- `o1`
- `o3`
- `o4-mini`
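A minimal sketch of the two entries; the variable names `OPENAI_API_KEY` and `OPENAI_BASE_URL` are assumptions to check against your setup:

```shell
# OpenAI key (variable names assumed)
OPENAI_API_KEY=sk-...your-key-here
# Optional: point at an OpenAI-compatible service instead of api.openai.com
OPENAI_BASE_URL=https://api.openai.com/v1
```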
### OpenRouter
OpenRouter API key for access to multiple model providers.

How to get:
1. Sign up at OpenRouter
2. Go to the Keys section
3. Create a new API key

Supported models:
- `openai/gpt-4o`
- `anthropic/claude-3.5-sonnet`
- `deepseek/deepseek-r1`
- `google/gemini-pro`
- And 100+ more models
OpenRouter provides access to multiple AI providers through a single API, perfect for comparing models.
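As a sketch, assuming the variable name `OPENROUTER_API_KEY`:

```shell
# OpenRouter key (variable name assumed)
OPENROUTER_API_KEY=sk-or-...your-key-here
```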
### Azure OpenAI
Azure OpenAI service API key, your Azure OpenAI resource endpoint URL, and the API version (e.g., `2024-02-15-preview`).

How to get:
1. Create an Azure OpenAI resource in the Azure Portal
2. Deploy a model (GPT-4, GPT-3.5-turbo, etc.)
3. Get the endpoint and API key from the resource overview
4. Note the API version from the deployment
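A sketch of the three entries; the variable names are assumptions to verify against your DeepWiki-Open version:

```shell
# Azure OpenAI (variable names assumed; endpoint is a placeholder)
AZURE_OPENAI_API_KEY=...your-key-here
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_VERSION=2024-02-15-preview
```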
### AWS Bedrock
AWS access key ID, AWS secret access key, and the AWS region where Bedrock models are available.

Supported models:
- `anthropic.claude-3-sonnet-20240229-v1:0`
- `anthropic.claude-3-haiku-20240307-v1:0`
- `anthropic.claude-3-opus-20240229-v1:0`
- `amazon.titan-text-express-v1`
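A sketch using the standard AWS credential variable names (whether DeepWiki-Open reads `AWS_REGION` or another region variable is an assumption to verify):

```shell
# Standard AWS credentials; the region must offer Bedrock
AWS_ACCESS_KEY_ID=AKIA...your-access-key
AWS_SECRET_ACCESS_KEY=...your-secret-key
AWS_REGION=us-east-1
```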
### Ollama (Local Models)
Ollama server URL for local AI models.

Setup Ollama:
- macOS
- Linux
- Docker

Recommended models:
- `qwen3:1.7b` (lightweight)
- `llama3:8b` (balanced)
- `qwen3:8b` (high context)
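As a sketch for Linux, with Ollama's default port of 11434 (the variable name `OLLAMA_HOST` and how DeepWiki-Open reads it are assumptions to verify):

```shell
# Setup (Linux): install Ollama and pull a model first, e.g.
#   curl -fsSL https://ollama.com/install.sh | sh
#   ollama pull qwen3:1.7b
# Then point DeepWiki-Open at the local server (variable name assumed):
OLLAMA_HOST=http://localhost:11434
```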
### DashScope (Alibaba)
Alibaba DashScope API key for Qwen models.

How to get:
1. Sign up at DashScope
2. Create an API key in the console
3. Add the key to your environment

Supported models:
- `qwen-plus`
- `qwen-turbo`
- `deepseek-r1`
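As a sketch, assuming the variable name `DASHSCOPE_API_KEY`:

```shell
# DashScope key (variable name assumed)
DASHSCOPE_API_KEY=sk-...your-key-here
```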
## Server Configuration
- Port for the FastAPI backend server
- Base URL for the API server (used by the frontend)
- Environment mode (`development`, `production`, or `test`)
Example server configuration:
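A minimal sketch; the variable names `PORT`, `SERVER_BASE_URL`, and `NODE_ENV`, and the default port 8001, are assumptions to verify against your DeepWiki-Open version:

```shell
# Server settings (variable names assumed)
PORT=8001
SERVER_BASE_URL=http://localhost:8001
NODE_ENV=development
```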
## Security & Authentication
### Authorization Mode
Enable the authorization requirement for wiki generation, and set the secret code required when authorization mode is enabled.

## Logging & Debugging
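A usage sketch, assuming the variable names `DEEPWIKI_AUTH_MODE` and `DEEPWIKI_AUTH_CODE`:

```shell
# Authorization mode (variable names assumed)
DEEPWIKI_AUTH_MODE=true
DEEPWIKI_AUTH_CODE=your-secret-code
```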
Logging verbosity level. Options: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`.

Path for log file output.
Example logging configuration:
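A minimal sketch, assuming the variable names `LOG_LEVEL` and `LOG_FILE_PATH` and a placeholder log path:

```shell
# Logging settings (variable names assumed)
LOG_LEVEL=INFO
LOG_FILE_PATH=./logs/deepwiki.log
```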
In production, use `INFO` or `WARNING` level to reduce log volume.

## Advanced Configuration
Directory containing configuration JSON files. Redis connection URL for caching (optional).

## Environment File Templates
### Development
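A development-oriented `.env` sketch; every variable name here is an assumption to check against your DeepWiki-Open version:

```shell
# .env -- development sketch (variable names assumed)
GOOGLE_API_KEY=AIza...your-dev-key
OPENAI_API_KEY=sk-...your-dev-key
PORT=8001
SERVER_BASE_URL=http://localhost:8001
NODE_ENV=development
LOG_LEVEL=DEBUG
```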
### Production
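A production-oriented `.env` sketch with authorization mode enabled and the optional advanced settings filled in; all variable names, paths, and URLs below are assumptions and placeholders to verify before use:

```shell
# .env -- production sketch (variable names assumed; values are placeholders)
OPENAI_API_KEY=sk-...your-prod-key
PORT=8001
SERVER_BASE_URL=https://deepwiki.example.com
NODE_ENV=production
LOG_LEVEL=WARNING
LOG_FILE_PATH=/var/log/deepwiki/app.log
DEEPWIKI_AUTH_MODE=true
DEEPWIKI_AUTH_CODE=change-this-secret
DEEPWIKI_CONFIG_DIR=/etc/deepwiki/config
REDIS_URL=redis://localhost:6379/0
```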
### Docker
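A Docker Compose sketch that loads the same `.env` file; the image name and port mapping are assumptions to verify against the project's own Docker instructions:

```yaml
# docker-compose.yml sketch (image name and ports are assumptions)
services:
  deepwiki:
    image: ghcr.io/asyncfuncai/deepwiki-open:latest
    env_file: .env
    ports:
      - "8001:8001"   # backend API
      - "3000:3000"   # frontend
```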
## Validation & Testing
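A minimal POSIX-sh sketch that checks whether at least one provider key is present before startup; the variable names it probes are assumptions to adjust for your configuration:

```shell
# Check that at least one AI provider API key is set.
# Variable names below are assumptions -- edit to match your setup.
check_provider_keys() {
  found=1
  for var in GOOGLE_API_KEY OPENAI_API_KEY OPENROUTER_API_KEY; do
    # Indirect lookup of the variable named in $var
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
      echo "$var is set"
      found=0
    fi
  done
  return $found
}
```

Run it before starting the server, for example `check_provider_keys || exit 1`.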
## Security Best Practices

### API Key Security

- Never commit `.env` files to version control
- Use different API keys for development and production
- Regularly rotate API keys
- Monitor API usage for unexpected activity
- Use environment-specific keys when possible
### Production Security
### Network Security

- Use HTTPS in production
- Configure proper CORS settings
- Use private networks for internal components
- Enable authorization mode for public deployments
## Troubleshooting
### Environment Variables Not Loading

Symptoms: API key errors, default values used

Solutions:
- Verify the `.env` file is in the project root
- Check file permissions (readable by the application)
- Ensure no syntax errors in the `.env` file
- Restart the application after changes
### API Key Validation Errors

Symptoms: "Invalid API key" errors

Solutions:
- Test API keys against the provider's documentation
- Check for extra spaces or characters
- Verify the key has the correct permissions/scopes
- Confirm the key hasn't expired or been revoked
### Port Conflicts

Symptoms: "Port already in use" errors

Solutions:
- Change the PORT environment variable
- Kill existing processes on the port
- Use Docker with port mapping
- Configure a reverse proxy
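As a sketch of the first two solutions on Linux/macOS (port 8001 and the `PORT` variable name are assumptions):

```shell
# See what is listening on the configured port; if another process
# owns it, stop that process or move DeepWiki to a different port.
lsof -i :8001 || echo "port 8001 appears free"

# Switch the backend to another port via the environment:
export PORT=8002
```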
## Next Steps

- **Model Provider Setup**: Configure specific AI model providers and their settings
- **Production Deployment**: Deploy DeepWiki with production-ready configuration
- **Configuration Files**: Learn about JSON configuration files for advanced customization
- **Security Guide**: Implement security best practices for production deployments