API Base URL
For production deployments, replace the default base URL with your actual API server URL.
API Architecture
The DeepWiki API is organized into several key areas:
Wiki Generation
Generate comprehensive documentation wikis from repository URLs
Chat & RAG
Interactive Q&A with repository content using RAG
Model Management
Configure and manage AI model providers
WebSocket API
Real-time streaming for generation progress and chat
Quick Start
Authentication
Most endpoints require authentication via environment-configured API keys. The API validates your configured providers automatically.
Basic Wiki Generation
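As a quick start, the sketch below requests wiki generation for a public repository. It assumes the API is reachable at a local default base URL and uses a placeholder repository URL; adjust both for your deployment.

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed local default; use your API server URL

# Kick off wiki generation for a public repository (placeholder URL).
resp = requests.post(
    f"{BASE_URL}/wiki/generate",
    json={
        "repo_url": "https://github.com/owner/repo",
        "model_provider": "google",
    },
    timeout=600,  # generation can take several minutes for large repositories
)
resp.raise_for_status()
print(resp.json())
```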
Core Endpoints
Wiki Generation
POST /wiki/generate
Generate a comprehensive wiki from a repository URL.
Request Body:
- repo_url (string, required): Repository URL
- model_provider (string): AI provider (google, openai, openrouter, azure, ollama)
- model_name (string): Specific model to use
- force_regenerate (boolean): Force regeneration even if cached
- access_token (string): Repository access token for private repos
- auth_code (string): Authorization code (if auth mode enabled)
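A fuller request sketch, assuming the same local base URL as the quick start example; the repository URL, model name, and access token are placeholders, and force_regenerate bypasses any cached wiki.

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default; adjust for your deployment

payload = {
    "repo_url": "https://github.com/owner/private-repo",  # placeholder
    "model_provider": "openai",
    "model_name": "gpt-4o",        # example model name; use one your provider exposes
    "force_regenerate": True,      # regenerate even if a cached wiki exists
    "access_token": "ghp_xxx",     # placeholder token for a private repository
}

resp = requests.post(f"{BASE_URL}/wiki/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())
```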
GET /wiki/projects
List all processed repositories and their wiki status.
Query Parameters:
- limit (integer): Number of results to return
- offset (integer): Pagination offset
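For example, to page through processed repositories (base URL assumed as above):

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default

# Fetch the first 20 projects, then the next 20 via the offset parameter.
first_page = requests.get(f"{BASE_URL}/wiki/projects", params={"limit": 20, "offset": 0})
first_page.raise_for_status()
second_page = requests.get(f"{BASE_URL}/wiki/projects", params={"limit": 20, "offset": 20})
second_page.raise_for_status()
print(first_page.json())
```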
GET /wiki/{project_id}
Retrieve a specific generated wiki by project ID.
Path Parameters:
- project_id (string): Unique project identifier
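A minimal retrieval sketch; the project ID below is a placeholder taken from a previous generation or listing call.

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default

project_id = "example-project-id"  # placeholder; take this from /wiki/projects
resp = requests.get(f"{BASE_URL}/wiki/{project_id}")
resp.raise_for_status()
wiki = resp.json()
print(wiki)
```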
Chat & RAG
POST /chat/stream
Stream chat responses using RAG on repository content.
Request Body:
- message (string, required): User question
- repo_url (string, required): Repository URL for context
- conversation_history (array): Previous conversation messages
- model_provider (string): AI provider for responses
- deep_research (boolean): Enable multi-turn research mode
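Because this endpoint streams its response, a client should read the body incrementally rather than waiting for a complete payload. A sketch using requests with stream=True, assuming the same local base URL and a placeholder repository:

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default

payload = {
    "message": "How does the indexing pipeline work?",
    "repo_url": "https://github.com/owner/repo",  # placeholder
    "conversation_history": [],                   # no prior turns in this example
    "deep_research": False,
}

# Stream the answer chunk by chunk as it is generated.
with requests.post(f"{BASE_URL}/chat/stream", json=payload, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None):
        if chunk:
            print(chunk.decode("utf-8"), end="", flush=True)
```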
Model Configuration
GET /models/config
Get available model providers and configurations.
Response: Available providers, models, and their parameters.
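For example, to inspect which providers and models the server currently exposes (base URL assumed as above):

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default

resp = requests.get(f"{BASE_URL}/models/config")
resp.raise_for_status()
config = resp.json()
print(config)  # providers, models, and their parameters
```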
POST /models/validate
Validate API keys and model availability.
Request Body:
- provider (string): Provider to validate
- model_name (string): Specific model to test
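A small validation sketch; the provider and model name are example values, so substitute the combination you intend to use:

```python
import requests

BASE_URL = "http://localhost:8001"  # assumed default

resp = requests.post(
    f"{BASE_URL}/models/validate",
    json={"provider": "google", "model_name": "gemini-2.0-flash"},  # example values
)
resp.raise_for_status()
print(resp.json())
```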
Data Models
WikiPage
- Unique identifier for the wiki page
- Human-readable page title
- Full page content in Markdown format with Mermaid diagrams
- Source file paths that contributed to this page
- Page importance level: high, medium, or low
- Array of related page IDs for cross-references
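The key names below are illustrative only (the reference above lists field descriptions, not the exact keys); treat this as a sketch of the general shape rather than the authoritative schema.

```python
# Hypothetical WikiPage shape; key names are assumptions, not the documented schema.
wiki_page = {
    "id": "getting-started",                       # unique identifier for the page
    "title": "Getting Started",                    # human-readable title
    "content": "# Getting Started\n...",           # Markdown body, may embed Mermaid diagrams
    "file_paths": ["README.md", "docs/setup.md"],  # source files that contributed
    "importance": "high",                          # one of: high, medium, low
    "related_pages": ["installation", "configuration"],  # cross-referenced page IDs
}
```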
RepoInfo
- Full repository URL
- Repository name
- Repository owner/organization
- Git platform: github, gitlab, or bitbucket
- Whether the repository is private
- Default branch name (usually main or master)
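As with WikiPage, the key names here are assumptions used only to illustrate the shape of the model:

```python
# Hypothetical RepoInfo shape; key names are assumptions, not the documented schema.
repo_info = {
    "repo_url": "https://github.com/owner/repo",  # full repository URL
    "name": "repo",                                # repository name
    "owner": "owner",                              # owner or organization
    "platform": "github",                          # one of: github, gitlab, bitbucket
    "is_private": False,                           # whether the repository is private
    "default_branch": "main",                      # usually main or master
}
```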
Error Handling
The API uses standard HTTP status codes and returns detailed error information:
Common Error Codes
400 Bad Request
Common causes:
- Invalid repository URL format
- Missing required parameters
- Invalid model provider/name combination
401 Unauthorized
404 Not Found
Common causes:
- Repository doesn’t exist
- Repository is private and requires access token
- Wiki not found for the given project ID
429 Too Many Requests
Common causes:
- API rate limits exceeded
- AI provider rate limits reached
500 Internal Server Error
Common causes:
- AI model generation failures
- Repository processing errors
- Configuration issues
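A small client-side sketch of how these codes might be handled, assuming the same base URL as the examples above; the retry behaviour shown is illustrative, not something the API prescribes.

```python
import time
import requests

BASE_URL = "http://localhost:8001"  # assumed default

def generate_wiki(repo_url: str) -> dict:
    """Call /wiki/generate and translate common error codes into readable failures."""
    resp = requests.post(f"{BASE_URL}/wiki/generate", json={"repo_url": repo_url}, timeout=600)
    if resp.status_code == 429:
        # Provider or API rate limit hit: back off briefly and retry once.
        time.sleep(30)
        resp = requests.post(f"{BASE_URL}/wiki/generate", json={"repo_url": repo_url}, timeout=600)
    if resp.status_code == 404:
        raise RuntimeError("Repository not found, or it is private and needs an access token")
    if resp.status_code == 400:
        raise RuntimeError(f"Bad request: {resp.text}")
    resp.raise_for_status()  # surfaces 401, 500, and anything else unexpected
    return resp.json()
```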
Rate Limits
Rate limits are determined by your AI provider; DeepWiki does not impose additional rate limits of its own.
Provider Rate Limits
- OpenAI GPT-4: 500 requests/minute, 30,000 tokens/minute
- OpenAI GPT-3.5: 3,500 requests/minute, 90,000 tokens/minute
- Exact limits vary by usage tier and model