# API Documentation
One endpoint. Paste the markdown into your LLM. Done.
## Quickstart
Get relevant code from any GitHub repo in 30 seconds:
```bash
curl https://contextpacker.com/v1/packs \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "repo_url": "https://github.com/pallets/flask",
    "query": "How does routing work?"
  }'
```
That's it. The response includes a `markdown` field; paste it directly into your LLM prompt.
Don't have an API key? Request one here — we'll send it within 24 hours. Free tier: 50 packs/month.
## Authentication
All API requests require an API key in the `X-API-Key` header:

```
X-API-Key: cp_live_xxxxxxxxxxxx
```

Store your key in an environment variable:

```bash
export CONTEXTPACKER_API_KEY="cp_live_xxxxxxxxxxxx"
```
## POST /v1/packs
Create a context pack from a GitHub repository.
### Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| `repo_url` | string | Yes | GitHub HTTPS URL (e.g., `https://github.com/owner/repo`) |
| `query` | string | Yes | Natural language question (1–2,000 characters) |
| `max_tokens` | integer | No | Maximum context tokens. Default: 6,000. Range: 100–100,000 |
| `vcs` | object | No | For private repos. See Private Repositories below |
### Example Request
```json
{
  "repo_url": "https://github.com/pallets/flask",
  "query": "How does the routing system work?",
  "max_tokens": 8000
}
```
### Response Format
A successful response (HTTP 200) contains:
```json
{
  "id": "pack_a1b2c3d4e5f6",
  "engine_version": "0.1.0",
  "markdown": "### src/flask/app.py\n```python\nclass Flask:\n ...",
  "files": [
    {
      "path": "src/flask/app.py",
      "tokens": 1823,
      "language": "python",
      "relevance_score": 0.95,
      "reason": "Core Flask application class with route registration"
    }
  ],
  "stats": {
    "tokens_packed": 4521,
    "tokens_raw_repo": 285000,
    "tokens_saved": 280479,
    "files_selected": 6,
    "files_considered": 127,
    "repo_clone_ms": 1250,
    "selector_ms": 890,
    "packing_ms": 45,
    "cache_hit": false,
    "truncated": false,
    "files_truncated": 0,
    "downstream_input_price_per_1M": 2.0,
    "selector_llm_cost_usd": 0.0002,
    "estimated_gross_savings_usd": 0.56,
    "estimated_net_savings_usd": 0.56
  }
}
```
### Key Fields
| Field | Description |
|---|---|
| `markdown` | The main output. Paste this into your LLM prompt. |
| `files[]` | List of selected files with `path`, `tokens`, `language`, `relevance_score`, and `reason`. |
| `stats.tokens_packed` | Total tokens in the markdown output. |
| `stats.tokens_saved` | Tokens saved vs. the raw repo (`tokens_raw_repo` - `tokens_packed`). |
| `stats.cache_hit` | Whether the repo index was served from cache (faster). |
| `stats.selector_llm_cost_usd` | Cost of the selector LLM call (typically ~$0.0002). |
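If you want to log or display these numbers, the `stats` block has everything you need. A minimal sketch in Python, using only the field names shown in the response above (the request itself mirrors the quickstart example):

```python
import os

import requests

# Same request as the quickstart example
response = requests.post(
    "https://contextpacker.com/v1/packs",
    headers={"X-API-Key": os.environ["CONTEXTPACKER_API_KEY"]},
    json={"repo_url": "https://github.com/pallets/flask", "query": "How does routing work?"},
    timeout=60,
)
response.raise_for_status()
pack = response.json()

stats = pack["stats"]
print(f"Packed {stats['files_selected']} of {stats['files_considered']} files "
      f"into {stats['tokens_packed']:,} tokens "
      f"(saved {stats['tokens_saved']:,} tokens vs. the raw repo).")
if stats["truncated"]:
    print(f"{stats['files_truncated']} file(s) were truncated to fit the token budget.")
```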
## Private Repositories

To access private GitHub repos, include a `vcs` object with your Personal Access Token:
```json
{
  "repo_url": "https://github.com/your-org/private-repo",
  "query": "How does authentication work?",
  "vcs": {
    "provider": "github",
    "token": "ghp_xxxxxxxxxxxx",
    "branch": "main"
  }
}
```

`branch` is optional and defaults to the repo's default branch.
### Security
- Your token is used only for cloning
- Never logged, never stored, never persisted
- Repo is cloned to a temp directory and deleted immediately
- Use short-lived tokens (1-hour expiry) for extra safety
### Creating a GitHub PAT

1. Go to GitHub Settings → Tokens
2. Click "Generate new token (Fine-grained)"
3. Set the expiration to 1 hour or 1 day
4. Select your repository
5. Grant the Contents: Read-only permission
6. Generate and copy the token
## Errors

All errors return a JSON object with `error` and `message` fields:
| HTTP | Error Code | Cause & Fix |
|---|---|---|
| 400 | INVALID_URL | URL must be https://github.com/owner/repo |
| 401 | UNAUTHORIZED | Missing or invalid X-API-Key header |
| 401 | AUTH_FAILED | GitHub PAT is invalid or lacks permissions. Need repo or contents:read scope. |
| 404 | REPO_NOT_FOUND | Repository doesn't exist or your token can't access it |
| 422 | REPO_CLONE_TIMEOUT | Clone took >15s. Repo may be too large. |
| 422 | REPO_TOO_LARGE | Repository exceeds size limits |
| 422 | REPO_CLONE_FAILED | Git clone failed (network issue, repo corrupt, etc.) |
| 422 | EMPTY_REPO | Repository is empty or contains no indexable files |
| 429 | RATE_LIMITED | Monthly quota exceeded. Upgrade plan. |
| 503 | QUEUE_TIMEOUT | Server busy. Retry in 30 seconds. |
| 504 | REQUEST_TIMEOUT | Request took >60s. Try smaller max_tokens. |
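As a rough sketch of how you might handle these errors from Python: the retry below follows the "retry in 30 seconds" guidance for 503, and the `error`/`message` fields are the ones described at the top of this section (the helper name is just for illustration).

```python
import os
import time

import requests


def create_pack(payload: dict) -> dict:
    """POST /v1/packs with one retry if the server is busy (503 QUEUE_TIMEOUT)."""
    for attempt in range(2):
        response = requests.post(
            "https://contextpacker.com/v1/packs",
            headers={"X-API-Key": os.environ["CONTEXTPACKER_API_KEY"]},
            json=payload,
            timeout=60,
        )
        if response.status_code == 503 and attempt == 0:
            time.sleep(30)  # server busy; retry once after 30 seconds
            continue
        if not response.ok:
            body = response.json()  # {"error": "...", "message": "..."}
            raise RuntimeError(f"{response.status_code} {body['error']}: {body['message']}")
        return response.json()
```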
## Python Example
Copy this helper function into your project:
```python
import os

import requests


def get_context(
    repo: str,
    question: str,
    max_tokens: int = 8000,
    github_token: str = None,
) -> str:
    """
    Get relevant code context from any GitHub repo.

    Args:
        repo: GitHub URL (e.g., "https://github.com/owner/repo")
        question: Natural language question about the codebase
        max_tokens: Maximum context size (default: 8000)
        github_token: Optional PAT for private repos

    Returns:
        Markdown string ready to paste into an LLM prompt
    """
    payload = {
        "repo_url": repo,
        "query": question,
        "max_tokens": max_tokens,
    }
    if github_token:
        payload["vcs"] = {"provider": "github", "token": github_token}

    response = requests.post(
        "https://contextpacker.com/v1/packs",
        headers={"X-API-Key": os.environ["CONTEXTPACKER_API_KEY"]},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["markdown"]


# Usage
if __name__ == "__main__":
    context = get_context(
        "https://github.com/pallets/flask",
        "How does routing work?",
    )
    print(context)
```
Install dependencies:

```bash
pip install requests
```
## JavaScript Example
```javascript
async function getContext(repo, question, options = {}) {
  const { maxTokens = 8000, githubToken } = options;

  const payload = {
    repo_url: repo,
    query: question,
    max_tokens: maxTokens
  };
  if (githubToken) {
    payload.vcs = { provider: "github", token: githubToken };
  }

  const response = await fetch("https://contextpacker.com/v1/packs", {
    method: "POST",
    headers: {
      "X-API-Key": process.env.CONTEXTPACKER_API_KEY,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(payload)
  });

  if (!response.ok) {
    const error = await response.json();
    throw new Error(error.message || "Request failed");
  }

  const data = await response.json();
  return data.markdown;
}

// Usage
const context = await getContext(
  "https://github.com/expressjs/express",
  "How does middleware work?"
);
console.log(context);
```
## cURL Example
```bash
# Set your API key
export CONTEXTPACKER_API_KEY="cp_live_xxxxxxxxxxxx"

# Get context from a public repo
curl -s https://contextpacker.com/v1/packs \
  -H "X-API-Key: $CONTEXTPACKER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "repo_url": "https://github.com/pallets/flask",
    "query": "How does routing work?",
    "max_tokens": 8000
  }' | jq -r .markdown
```
For private repos, add the `vcs` field:

```bash
curl -s https://contextpacker.com/v1/packs \
  -H "X-API-Key: $CONTEXTPACKER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "repo_url": "https://github.com/your-org/private-repo",
    "query": "How does authentication work?",
    "vcs": {
      "provider": "github",
      "token": "'"$GITHUB_PAT"'"
    }
  }' | jq -r .markdown
```
## Using with OpenAI

Here's a complete example that fetches context and sends it to an OpenAI model:
```python
import os

import requests
from openai import OpenAI


def ask_codebase(repo: str, question: str) -> str:
    """Ask a question about any GitHub codebase."""
    # 1. Get relevant context from ContextPacker
    pack_response = requests.post(
        "https://contextpacker.com/v1/packs",
        headers={"X-API-Key": os.environ["CONTEXTPACKER_API_KEY"]},
        json={"repo_url": repo, "query": question, "max_tokens": 8000},
        timeout=60,
    )
    pack_response.raise_for_status()
    context = pack_response.json()["markdown"]

    # 2. Ask the model with the context
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant that answers questions about code. Use the provided code context to give accurate answers."
            },
            {
                "role": "user",
                "content": f"Here is the relevant code from the repository:\n\n{context}\n\n---\n\nQuestion: {question}"
            }
        ]
    )
    return response.choices[0].message.content


# Usage
answer = ask_codebase(
    "https://github.com/pallets/flask",
    "How does Flask handle request context?"
)
print(answer)
```
## Best Practices
### Write specific questions

The more specific your question, the better the file selection.

- Too vague: "How does this work?"
- Better: "How does JWT authentication work in the login flow?"
### Right-size your token budget

More tokens means more context, but returns diminish past roughly 10,000 tokens for focused questions.
- 2,000–4,000: Single function or small feature
- 6,000–10,000: Feature with dependencies (recommended default)
- 12,000–20,000: Architecture overview or multi-file feature
### Use short-lived PATs for private repos
Create fine-grained tokens with 1-hour expiry and read-only contents permission. Revoke after testing.
### Cache responses for repeated questions
If you're asking similar questions about the same repo, cache the markdown response on your side to save API calls.
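A minimal sketch of client-side caching, keyed on the exact arguments. It assumes the `get_context` helper from the Python example above:

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def get_context_cached(repo: str, question: str, max_tokens: int = 8000) -> str:
    """Memoize packs so repeated, identical questions don't spend extra API calls."""
    # get_context is the helper defined in the Python Example section
    return get_context(repo, question, max_tokens=max_tokens)


# The second call returns instantly from the local cache
context = get_context_cached("https://github.com/pallets/flask", "How does routing work?")
context = get_context_cached("https://github.com/pallets/flask", "How does routing work?")
```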
## Pricing
| Plan | Credits | Price |
|---|---|---|
| Signup | 100 credits | Free (no credit card required) |
| Top up | 1,000 credits | $9 |
1 credit = 1 API call. Credits never expire.
## Ready to try it?
Get a free API key (50 packs/month) — we'll send it within 24 hours.
Request API Key →