LLM Integration

codebrief is specifically designed to generate high-quality context for Large Language Models (LLMs). This guide shows you how to effectively use codebrief with popular AI tools and services.

Overview

LLMs work best with well-structured, comprehensive context. codebrief transforms your codebase into a format LLMs can work with effectively by providing:

  • Structured Information: Clear hierarchy and organization
  • Comprehensive Context: Complete project understanding
  • Focused Content: Only relevant information without noise
  • Standard Formats: Markdown output that LLMs process well

Supported LLM Platforms

codebrief works well with:

  • ChatGPT (OpenAI GPT-3.5, GPT-4, GPT-4o)
  • Claude (Anthropic Claude 3, Claude 3.5 Sonnet)
  • GitHub Copilot Chat
  • Codeium
  • Cursor
  • Local Models (Ollama, LM Studio, etc.)

Quick Start for LLMs

Generate Complete Project Context

# Create comprehensive project bundle
codebrief bundle --output project-context.md

# Copy to clipboard (requires pbcopy or xclip)
pbcopy < project-context.md  # macOS
xclip -selection clipboard < project-context.md  # Linux
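If you don't know in advance which clipboard tool a machine has, a small wrapper can pick whichever is available (wl-copy is the Wayland equivalent; the function name here is just an illustration):

```shell
# Copy a context file to the clipboard with whatever tool is installed.
copy_context() {
  if command -v pbcopy >/dev/null 2>&1; then
    pbcopy < "$1"                        # macOS
  elif command -v xclip >/dev/null 2>&1; then
    xclip -selection clipboard < "$1"    # Linux (X11)
  elif command -v wl-copy >/dev/null 2>&1; then
    wl-copy < "$1"                       # Linux (Wayland)
  else
    echo "No clipboard tool found; open $1 manually"
  fi
}

# Usage: copy_context project-context.md
```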

Focused Context for Specific Questions

# Code-focused context
codebrief flatten src/ --include "*.py" --output code-context.md

# Git-focused context for debugging
codebrief git-info --full-diff --output git-context.md

# Dependencies for architecture questions
codebrief deps --output deps-context.md

Platform-Specific Integration

ChatGPT Integration

Best Practices:

  1. Use Bundle Command for comprehensive analysis
  2. Include Git Context for debugging scenarios
  3. Limit Scope for focused questions
# For general code review/analysis
codebrief bundle \
  --output chatgpt-context.md \
  --git-log-count 10

# For specific debugging
codebrief bundle \
  --exclude-deps \
  --git-full-diff \
  --flatten src/specific_module/ \
  --output debug-context.md

Prompting Tips:

I'm working on a Python project. Here's the complete context:

[Paste codebrief output]

Please analyze the code structure and suggest improvements for:
1. Code organization
2. Error handling
3. Testing coverage

Claude Integration

Best Practices:

  1. Use Structured Output - Claude excels with well-organized information
  2. Include Documentation - Claude benefits from README and docs context
  3. Git History for understanding evolution
# Comprehensive analysis for Claude
codebrief bundle \
  --output claude-context.md \
  --git-log-count 15 \
  --flatten docs/ src/ tests/

# Include project documentation
codebrief flatten . \
  --include "*.md" "*.rst" "*.txt" \
  --exclude "**/node_modules/**" \
  --output docs-context.md

Prompting Strategy:

Here's my project context generated by codebrief:

[Paste output]

I need help with [specific task]. Please consider:
- The current project structure
- Recent Git changes
- Existing dependencies
- Code patterns already in use

GitHub Copilot Chat

Best Practices:

  1. Focused Context - Use specific tool outputs
  2. Current Branch Info - Include Git status
  3. Recent Changes - Show recent commits
# Context for Copilot Chat
codebrief git-info \
  --log-count 5 \
  --full-diff \
  --output copilot-context.md

# Code structure for current work
codebrief tree --output structure.txt
codebrief flatten src/ --include "*.py" --output current-code.md

Cursor Integration

Integration Steps:

  1. Generate Context Files
  2. Include in Cursor Project
  3. Reference in Chat
# Create Cursor-friendly context
mkdir .cursor-context
codebrief bundle --output .cursor-context/project-context.md
codebrief tree --output .cursor-context/structure.txt

# Add to .gitignore if needed
echo ".cursor-context/" >> .gitignore

Use Case Patterns

Code Review Assistance

# Complete review context
codebrief bundle \
  --output review-context.md \
  --git-log-count 5 \
  --git-full-diff \
  --flatten src/ tests/

# Prompt for LLM:
# "Please review this code for: security, performance, maintainability"

Debugging Assistance

# Debug-focused context
codebrief git-info \
  --full-diff \
  --diff-options "--stat" \
  --output debug-git.md

codebrief flatten src/ \
  --include "*.py" \
  --output debug-code.md

# Prompt: "I have a bug in [specific area]. Here's my current code and recent changes."

Architecture Planning

# Architecture context
codebrief bundle \
  --exclude-git \
  --output architecture-context.md

# Prompt: "Help me refactor this codebase to improve [specific aspect]"

Documentation Generation

# Documentation context
codebrief bundle \
  --exclude-deps \
  --git-log-count 1 \
  --output docs-context.md

# Prompt: "Generate comprehensive documentation for this project"

Advanced LLM Workflows

Automated Context Generation

#!/bin/bash
# llm-context.sh - Automated context generation

# Generate different context types
codebrief bundle --output full-context.md
codebrief tree --output structure.md
codebrief deps --output dependencies.md
codebrief git-info --output git-context.md

echo "Context files generated:"
echo "- full-context.md (complete project)"
echo "- structure.md (file tree)"
echo "- dependencies.md (dependencies only)"
echo "- git-context.md (git info only)"

CI/CD Integration for Context

# .github/workflows/context-generation.yml
name: Generate LLM Context

on:
  pull_request:
    branches: [main]

jobs:
  generate-context:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # fetch full history so Git log/diff context is complete

      - name: Install codebrief
        run: |
          pip install poetry
          poetry install

      - name: Generate PR Context
        run: |
          poetry run codebrief bundle \
            --output pr-context.md \
            --git-log-count 10 \
            --git-full-diff

      - name: Add to PR Comment
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "## 🤖 LLM Context Generated" >> comment.md
          echo "Use this context for AI-assisted code review:" >> comment.md
          echo "\`\`\`" >> comment.md
          cat pr-context.md >> comment.md
          echo "\`\`\`" >> comment.md
          # Very large contexts may exceed GitHub's comment size limit
          gh pr comment "${{ github.event.pull_request.number }}" --body-file comment.md

Context Templates

Create reusable context templates:

# templates/code-review.sh
codebrief bundle \
  --output contexts/code-review-context.md \
  --git-log-count 5 \
  --git-full-diff \
  --flatten src/ tests/

# templates/debugging.sh
codebrief git-info \
  --full-diff \
  --output contexts/debug-context.md

codebrief flatten src/ \
  --include "*.py" \
  --exclude "*test*" \
  --output contexts/code-context.md
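With templates in place, a short loop can regenerate every context in one pass. This sketch assumes the layout shown above (scripts in templates/, output in contexts/):

```shell
# Run every context template in one pass (assumes template scripts live in
# templates/ and write their output into contexts/, as in the examples above)
mkdir -p contexts
for template in templates/*.sh; do
  [ -e "$template" ] || continue  # skip cleanly when no templates exist yet
  bash "$template"
done
```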

Output Optimization for LLMs

Token Efficiency

# Efficient context for token limits
codebrief flatten src/ \
  --include "*.py" \
  --exclude "*test*" "*__pycache__*" \
  --output efficient-context.md

# Focus on recent changes only
codebrief git-info \
  --log-count 3 \
  --diff-options "--stat" \
  --output recent-changes.md
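To check whether a generated file is likely to fit a model's context window, a rough estimate is usually enough. A common heuristic is ~4 characters per token for English text and code; actual tokenizers vary, so treat the result as a ballpark:

```shell
# Rough token estimate for a context file, assuming ~4 characters per token
# (a common heuristic for English text and code; real tokenizers differ).
estimate_tokens() {
  echo $(( $(wc -c < "$1") / 4 ))
}

# Usage: estimate_tokens efficient-context.md
```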

Structured Organization

codebrief automatically creates well-structured output:

# Project Bundle

## Table of Contents
- [Directory Tree](#directory-tree)
- [Git Context](#git-context)
- [Dependencies](#dependencies)
- [Code Files](#code-files)

## Directory Tree
[Clean file structure]

## Git Context
[Recent commits and changes]

## Dependencies
[Project dependencies by type]

## Code Files
[Organized by directory]

Best Practices

Context Size Management

  1. Use Specific Tools for focused questions
  2. Exclude Irrelevant Sections with bundle options
  3. Filter File Types based on your question
  4. Limit Git History to recent relevant commits
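When a bundle still exceeds the model's input limit, it can be split into fixed-size chunks and pasted across several messages. A minimal sketch using GNU coreutils split (the 2000-line chunk size is an arbitrary example; tune it to your model):

```shell
# Split an oversized context file into numbered chunks for multi-message pasting.
# Requires GNU coreutils split (-d produces numeric suffixes: -00, -01, ...).
split_context() {
  split -l 2000 -d "$1" "${1%.md}-part-"
}

# Usage: split_context full-context.md
# Produces full-context-part-00, full-context-part-01, ...
```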

Quality Context Tips

  1. Include .llmignore to exclude noise
  2. Use Configuration for consistent defaults
  3. Update Regularly for current context
  4. Test with LLMs to refine your approach
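As a starting point, a .llmignore for a typical Python project might look like this (assuming it follows gitignore-style patterns; adjust the entries to your stack):

```
# .llmignore — keep generated and vendored files out of LLM context
__pycache__/
.venv/
node_modules/
dist/
*.lock
```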

Security Considerations

# Ensure sensitive files are excluded
echo "*.env" >> .llmignore
echo "secrets/" >> .llmignore
echo "*.key" >> .llmignore
echo "credentials.*" >> .llmignore

# Verify exclusions work
codebrief tree  # Check output for sensitive files
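An ignore file only helps if its patterns are right, so a quick grep over the generated output adds a second line of defense. In this sketch, scan_for_secrets is a hypothetical helper and the pattern list is illustrative, not exhaustive:

```shell
# Scan a generated context file for secret-looking strings before sharing it.
# scan_for_secrets is a hypothetical helper; extend the patterns for your project.
scan_for_secrets() {
  if grep -inE 'api[_-]?key|secret|password|private key' "$1"; then
    echo "WARNING: possible secrets in $1 -- review before pasting into an LLM"
  else
    echo "No obvious secrets found in $1"
  fi
}

# Usage: scan_for_secrets project-context.md
```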

Troubleshooting

Context Too Large

# Reduce context size
codebrief bundle \
  --exclude-deps \
  --exclude-git \
  --flatten src/core/ \
  --output minimal-context.md

Missing Important Context

# Ensure nothing important is excluded
codebrief tree  # Verify file structure
cat .llmignore  # Check ignore patterns

LLM Not Understanding Context

  1. Add More Structure - Use bundle command
  2. Include Documentation - Add README, comments
  3. Provide Recent Changes - Include Git context
  4. Clear Scope - Focus on specific areas

Example Prompts

General Analysis

I'm sharing my project context generated by codebrief. Please analyze:

1. Code organization and structure
2. Potential improvements
3. Best practices compliance
4. Security considerations

[Paste codebrief output]

Specific Feature Development

Here's my current project context:

[Paste codebrief output]

I want to add [specific feature]. Please:
1. Suggest where to implement it
2. Identify required changes
3. Recommend testing approach
4. Consider integration points

Bug Investigation

I have a bug in [specific area]. Here's the relevant context:

[Paste focused codebrief output]

The issue is: [describe problem]
Expected: [expected behavior]
Actual: [actual behavior]

Please help investigate and suggest fixes.

Next Steps