DeepWiki-Open uses environment variables to configure AI providers, server settings, authentication, and advanced features. This guide covers all available environment variables and their usage.

Required Environment Variables

At minimum, you need an API key for at least one AI provider:
# Choose ONE of these AI providers
GOOGLE_API_KEY=your_google_api_key_here
# OR
OPENAI_API_KEY=your_openai_api_key_here
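
These keys are loaded from the environment at startup; since the backend reads them with python-dotenv (see the validation script later in this guide), the usual place to put them is a .env file in the project root. A minimal sketch:

# Create a .env file in the project root; python-dotenv picks it up automatically
echo "GOOGLE_API_KEY=your_google_api_key_here" >> .env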

AI Provider Configuration

Google Gemini

GOOGLE_API_KEY: your Google AI Studio API key for Gemini models. How to get one:
  1. Visit Google AI Studio
  2. Click “Create API Key”
  3. Copy the generated key
Supported models:
  • gemini-2.0-flash (default, recommended)
  • gemini-1.5-flash
  • gemini-1.0-pro
Google Gemini offers generous free tier limits and excellent performance for documentation generation.
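
As a quick smoke test, independent of DeepWiki itself, you can list the models your key can access directly against Google's Generative Language API:

# A valid key returns a JSON list of available Gemini models
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"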

OpenAI

OPENAI_API_KEY: your OpenAI API key for GPT models. A custom API endpoint for OpenAI-compatible services can also be configured. How to get a key:
  1. Visit OpenAI Platform
  2. Create a new secret key
  3. Copy the key (it starts with sk-)
Supported models:
  • gpt-4o (default)
  • gpt-4.1
  • o1
  • o3
  • o4-mini
OpenAI requires a paid account; free-tier users cannot access the API.
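
A quick way to confirm the key works, independent of DeepWiki, is to list models against the OpenAI API:

# A valid key returns a JSON list of models
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"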

OpenRouter

OPENROUTER_API_KEY: your OpenRouter API key, which grants access to multiple model providers. How to get one:
  1. Sign up at OpenRouter
  2. Go to the Keys section
  3. Create a new API key
Available models:
  • openai/gpt-4o
  • anthropic/claude-3.5-sonnet
  • deepseek/deepseek-r1
  • google/gemini-pro
  • And 100+ more models
OpenRouter provides access to multiple AI providers through a single API, perfect for comparing models.
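
An example entry (OPENROUTER_API_KEY is the variable checked by the validation script later in this guide), plus an optional connectivity check against the OpenRouter models endpoint:

OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional smoke test: list the models available to your key
curl https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY"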

Azure OpenAI

Azure OpenAI uses three variables:
  • AZURE_OPENAI_API_KEY: your Azure OpenAI service API key
  • AZURE_OPENAI_ENDPOINT: your Azure OpenAI resource endpoint URL
  • AZURE_OPENAI_VERSION: the API version (e.g., 2024-02-15-preview)
How to get them:
  1. Create an Azure OpenAI resource in the Azure Portal
  2. Deploy a model (GPT-4, GPT-3.5-turbo, etc.)
  3. Get the endpoint and API key from the resource overview
  4. Note the API version from the deployment
Example configuration:
AZURE_OPENAI_API_KEY=abc123def456ghi789
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com
AZURE_OPENAI_VERSION=2024-02-15-preview
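
To verify the resource outside DeepWiki, you can call the standard Azure OpenAI REST endpoint directly. The deployment name below (my-deployment) is a placeholder; substitute the deployment you created in step 2:

# Smoke test a deployed model (replace my-deployment with your deployment name)
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/my-deployment/chat/completions?api-version=$AZURE_OPENAI_VERSION" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'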

AWS Bedrock

AWS Bedrock requires AWS credentials and a region: an AWS access key, an AWS secret access key, and the AWS region where Bedrock models are available (an example configuration follows the model list below). Supported models:
  • anthropic.claude-3-sonnet-20240229-v1:0
  • anthropic.claude-3-haiku-20240307-v1:0
  • anthropic.claude-3-opus-20240229-v1:0
  • amazon.titan-text-express-v1
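
Example configuration, assuming the standard AWS credential variable names (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION); confirm these against your deployment. With the AWS CLI installed, you can also check Bedrock access directly:

# Standard AWS variable names assumed; verify against your deployment
AWS_ACCESS_KEY_ID=your_aws_access_key_id
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
AWS_REGION=us-east-1

# Optional: confirm Bedrock access with the AWS CLI
aws bedrock list-foundation-models --region us-east-1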

Ollama (Local Models)

OLLAMA_HOST: the Ollama server URL for local AI models (an example entry and connectivity check follow the model list below). To set up Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start service
ollama serve

# Pull a model
ollama pull llama3:8b
Supported models:
  • qwen3:1.7b (lightweight)
  • llama3:8b (balanced)
  • qwen3:8b (high context)
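
If Ollama runs somewhere other than the default local address, point DeepWiki at it with OLLAMA_HOST (the same variable used in the environment templates below). The tags endpoint is a handy check that the server is up and the model is pulled:

OLLAMA_HOST=http://localhost:11434

# Verify the server is reachable and list pulled models
curl http://localhost:11434/api/tags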

DashScope (Alibaba)

Alibaba DashScope API key for Qwen models (an example entry follows the model list below). How to get one:
  1. Sign up at DashScope
  2. Create an API key in the console
  3. Add the key to your environment
Supported models:
  • qwen-plus
  • qwen-turbo
  • deepseek-r1
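
An example entry. DASHSCOPE_API_KEY follows DashScope's own naming convention, but the exact variable name here is an assumption; confirm it against your DeepWiki deployment:

# Assumed variable name (DashScope convention); verify before relying on it
DASHSCOPE_API_KEY=your_dashscope_api_key_here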

Server Configuration

The server is configured with three variables:
  • PORT: port for the FastAPI backend server
  • SERVER_BASE_URL: base URL of the API server (used by the frontend)
  • NODE_ENV: environment mode (development, production, or test)
Example server configuration:
PORT=8002
SERVER_BASE_URL=https://api.deepwiki.example.com
NODE_ENV=production

Security & Authentication

Authorization Mode

Authorization is controlled by two variables:
  • DEEPWIKI_AUTH_MODE: enables the authorization requirement for wiki generation
  • DEEPWIKI_AUTH_CODE: the secret code required when authorization mode is enabled
Usage:
DEEPWIKI_AUTH_MODE=true
DEEPWIKI_AUTH_CODE=my-secret-code-123
When enabled, users must enter the auth code to generate wikis.
Authorization mode provides basic frontend protection but doesn’t secure direct API access.

Logging & Debugging

Two variables control logging:
  • LOG_LEVEL: logging verbosity level (DEBUG, INFO, WARNING, ERROR, or CRITICAL)
  • LOG_FILE_PATH: path for log file output
Example logging configuration:
LOG_LEVEL=DEBUG
LOG_FILE_PATH=./logs/deepwiki-debug.log
In production, use INFO or WARNING level to reduce log volume.

Advanced Configuration

Two optional variables cover advanced setups:
  • DEEPWIKI_CONFIG_DIR: directory containing configuration JSON files
  • REDIS_URL: Redis connection URL for caching (optional)
Example:
DEEPWIKI_CONFIG_DIR=/custom/config/path
REDIS_URL=redis://localhost:6379/0
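
If you enable Redis caching, redis-cli can confirm the URL is reachable before you start the server (a PONG reply means the connection works):

# Verify the Redis connection (requires redis-cli)
redis-cli -u redis://localhost:6379/0 ping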

Environment File Templates

Development

# Development Environment
NODE_ENV=development
LOG_LEVEL=DEBUG
LOG_FILE_PATH=./api/logs/development.log

# Server Config
PORT=8001
SERVER_BASE_URL=http://localhost:8001

# API Keys
GOOGLE_API_KEY=your_development_google_key
OPENAI_API_KEY=your_development_openai_key

# Local Ollama
OLLAMA_HOST=http://localhost:11434

# No authentication for dev
DEEPWIKI_AUTH_MODE=false

Production

# Production Environment  
NODE_ENV=production
LOG_LEVEL=INFO
LOG_FILE_PATH=/var/log/deepwiki/application.log

# Server Config
PORT=8001
SERVER_BASE_URL=https://api.yourdomain.com

# Production API Keys
GOOGLE_API_KEY=your_production_google_key
OPENAI_API_KEY=your_production_openai_key
AZURE_OPENAI_API_KEY=your_azure_key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_VERSION=2024-02-15-preview

# Enable authentication
DEEPWIKI_AUTH_MODE=true
DEEPWIKI_AUTH_CODE=your_secure_production_code

# Redis caching
REDIS_URL=redis://redis-server:6379/0

Docker

# Docker Environment
NODE_ENV=production
LOG_LEVEL=INFO

# Container networking
PORT=8001
SERVER_BASE_URL=http://deepwiki-api:8001

# API Keys
GOOGLE_API_KEY=your_google_key
OPENAI_API_KEY=your_openai_key

# External Ollama
OLLAMA_HOST=http://ollama-server:11434

# Persistent data
LOG_FILE_PATH=/app/logs/application.log
DEEPWIKI_CONFIG_DIR=/app/config

Validation & Testing

Step 1: Validate Environment

# Check environment variables are loaded
python -c "
import os
from dotenv import load_dotenv
load_dotenv()

# Check API keys
providers = {
    'Google': os.getenv('GOOGLE_API_KEY'),
    'OpenAI': os.getenv('OPENAI_API_KEY'),
    'OpenRouter': os.getenv('OPENROUTER_API_KEY'),
    'Azure': os.getenv('AZURE_OPENAI_API_KEY'),
}

for name, key in providers.items():
    status = '✓ Configured' if key else '✗ Missing'
    print(f'{name}: {status}')
"

Step 2: Test API Connections

# Test backend startup
python -m api.main

# Check for successful startup messages
# Look for: "Starting Streaming API on port 8001"
# No API key warnings for your configured providers

Step 3: Verify Frontend Connection

# Start frontend
npm run dev

# Test API connection at http://localhost:3000
# Model selection should show your configured providers
