Get DeepWiki-Open up and running with either Docker (recommended) or a manual setup. Docker Compose is the fastest way to get started.

Docker Setup (Recommended)

1. Clone the Repository

git clone https://github.com/AsyncFuncAI/deepwiki-open.git
cd deepwiki-open
Repository cloned successfully
2. Configure Environment Variables

Create a .env file with your API keys:
.env
# Required: Choose at least one AI provider
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key

# Optional: Additional providers
OPENROUTER_API_KEY=your_openrouter_api_key
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
AZURE_OPENAI_VERSION=your_azure_openai_version
OLLAMA_HOST=http://localhost:11434
At minimum, you need either GOOGLE_API_KEY or OPENAI_API_KEY to get started.
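
If you prefer working from a terminal, one way to create the file is with a heredoc; the key names are the same ones listed above, and the values are placeholders you must replace with real keys:
# Create .env in the repository root (replace the placeholder values)
cat > .env <<'EOF'
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
EOF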
3. Start with Docker Compose

docker-compose up
This will start both the backend API server (port 8001) and frontend web app (port 3000).
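If you'd rather keep your terminal free, standard Docker Compose flags (not specific to DeepWiki) let you run the stack in the background and follow its logs:
# Run in detached mode, then tail the combined logs
docker-compose up -d
docker-compose logs -f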
4. Access DeepWiki

Open your browser to http://localhost:3000
You should see the DeepWiki interface ready to generate your first wiki!

Manual Setup

For development or custom configurations, you can set up DeepWiki manually.
1. Set Up Environment Variables

Create a .env file in the project root:
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
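If you prefer not to create a file, exporting the variables in your shell session before launching the backend should also work, assuming the backend reads them from the process environment (as dotenv-style setups typically do):
# Alternative: export the keys for the current shell session only
export GOOGLE_API_KEY=your_google_api_key
export OPENAI_API_KEY=your_openai_api_key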
2. Start the Backend API

# Install Python dependencies
pip install -r api/requirements.txt

# Start the API server
python -m api.main
The API server will start on port 8001 by default.
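A virtual environment keeps DeepWiki's Python dependencies isolated from your system packages; this is standard Python tooling rather than anything DeepWiki-specific:
# Optional: create and activate a virtual environment first
python -m venv .venv
source .venv/bin/activate
pip install -r api/requirements.txt
python -m api.main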
3. Start the Frontend

Open a new terminal and run:
npm install
npm run dev
The frontend will be available at http://localhost:3000
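For a production-style run instead of the dev server, the usual Next.js scripts should apply; this assumes the repository's package.json follows the standard conventions, so check it before relying on this:
# Assumes the standard Next.js build/start scripts are defined in package.json
npm run build
npm start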

Generate Your First Wiki

1. Enter Repository URL

In the DeepWiki interface, enter a GitHub, GitLab, or Bitbucket repository URL:
  • https://github.com/openai/codex
  • https://github.com/microsoft/autogen
  • https://gitlab.com/gitlab-org/gitlab
  • https://bitbucket.org/redradish/atlassian_app_versions
Start with a smaller repository for your first test to see faster results.
2. Add Access Token (For Private Repos)

If accessing a private repository:
  1. Click “+ Add access tokens”
  2. Enter your GitHub, GitLab, or Bitbucket personal access token
Ensure your token has appropriate repository access permissions.
3. Select AI Model Provider

Choose your preferred AI model provider and model:
  • OpenAI: defaults to gpt-4o, with o1, o3, and o4-mini also available. Best for high-quality, detailed documentation.
  • OpenRouter: access to Claude, Llama, Mistral, and 100+ other models. Best for trying different models without multiple API keys.
4. Generate Wiki

Click “Generate Wiki” and watch the magic happen!
Generation time varies by repository size. Smaller repos take 30 seconds to 2 minutes, while larger ones may take 5-10 minutes.

API Key Setup

To get a Google Gemini API key:
  1. Visit Google AI Studio (https://aistudio.google.com)
  2. Create a new API key
  3. Add to .env as GOOGLE_API_KEY=your_key_here
Google Gemini offers generous free tier limits and fast performance.
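To confirm the key works before starting DeepWiki, one spot-check is Gemini's public model-listing endpoint; if Google has changed the path since this was written, consult their API documentation:
# List available Gemini models with your key (expects a JSON response)
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"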

Verification

Visit http://localhost:8001/docs to see the FastAPI documentation and test endpoints.
# Test API health
curl http://localhost:8001/health
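You can also confirm that both ports are actually bound; these are generic networking checks, not DeepWiki commands:
# Frontend should answer with an HTTP 200
curl -I http://localhost:3000
# See which process holds the API port
lsof -i :8001
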
The frontend at http://localhost:3000 should show:
  • Repository input field
  • Model selection dropdown
  • Generate Wiki button
Check that your environment variables are loaded correctly:
# Start the API server and watch its startup logs
python -m api.main
# Expected: INFO - Starting Streaming API on port 8001
# Expected: no warnings about missing API keys for your chosen provider
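
A quick way to double-check that the keys are actually present in .env (and free of stray spaces) is a simple grep; the variable names are the ones used earlier in this guide:
# Show the provider keys defined in .env, with line numbers
grep -nE '^(GOOGLE|OPENAI|OPENROUTER)_API_KEY=' .env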

Troubleshooting

If you can't connect to the API:
  • Ensure the backend is running on port 8001
  • Check firewall settings
  • Verify no other services are using port 8001

If environment variables aren't loading:
  • Check that the .env file exists in the project root
  • Verify API keys are correctly formatted
  • Ensure there are no extra spaces in environment variables

If your AI provider rejects requests:
  • Double-check API key accuracy
  • Verify API key permissions and quotas
  • Test the API key against the provider's documentation (see the sketch after this list for an OpenAI example)
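
For example, one way to spot-check an OpenAI key from the shell is OpenAI's public model-listing endpoint (a generic check, not part of DeepWiki):
# A valid key returns a JSON list of models; an invalid one returns a 401 error
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head
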
For more detailed troubleshooting, see the Troubleshooting Guide.