LangChain Workflow Patterns
Workflow patterns are proven approaches to common AI tasks. Instead of starting from scratch, you can use these battle-tested patterns as templates and adapt them to your specific needs.
Think of patterns like recipes - they give you the basic structure and ingredients, but you can adjust them to taste.
Content Processing Patterns
Simple Analysis Chain
Purpose: Analyze any content with AI and get structured results
graph LR
Input[Content Input] --> Extract[Extract/Load]
Extract --> Analyze[AI Analysis]
Analyze --> Format[Format Results]
Format --> Output[Structured Output]
style Analyze fill:#6d28d9,stroke:#fff,color:#fff
Components used:
- Content source (GetAllTextFromLink, Document Loader)
- Basic LLM Chain (analysis)
- EditFields (formatting)
Perfect for:
- Blog post analysis
- Product review summarization
- Document classification
- Content quality assessment
Goal: Extract key insights from blog posts
Workflow:
- GetAllTextFromLink → extract blog content
- Basic LLM Chain → analyze with prompt: “Extract main topic, key points, target audience, and writing style”
- EditFields → structure results into categories
Output: Structured analysis with topic, insights, and recommendations
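If you'd like to prototype the same flow in code first, here's a minimal plain-Python sketch of the chain. `fake_llm` and `extract_content` are stand-ins for a real model call and the GetAllTextFromLink step; swap in your own clients:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real chat-model call; returns a canned analysis."""
    return "topic: AI workflows | key_points: patterns, reuse | audience: developers"

def extract_content(url: str) -> str:
    """Stand-in for the GetAllTextFromLink step."""
    return f"(text extracted from {url})"

def analyze(content: str) -> str:
    prompt = (
        "Extract main topic, key points, target audience, and writing style:\n"
        + content
    )
    return fake_llm(prompt)

def format_results(raw: str) -> dict:
    """EditFields-style step: split 'key: value' pairs into a dict."""
    fields = {}
    for part in raw.split("|"):
        key, _, value = part.partition(":")
        fields[key.strip()] = value.strip()
    return fields

result = format_results(analyze(extract_content("https://example.com/blog-post")))
print(result["topic"])  # → AI workflows
```

Each function maps to one node in the diagram above, so the visual workflow and the code stay easy to compare.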
Goal: Summarize product reviews into actionable insights
Workflow:
- Multiple GetAllTextFromLink → collect reviews from different sources
- Merge → combine all review content
- Basic LLM Chain → summarize with prompt: “Identify common themes, pros/cons, and overall sentiment”
- EditFields → format into pros, cons, and recommendation score
Output: Comprehensive review summary with ratings
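The review-summarizer variant adds a merge step before the single LLM call. A rough sketch, again with `fake_llm` standing in for the real model:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model; returns a canned summary."""
    return "pros: easy setup | cons: limited docs | score: 4/5"

def collect_reviews(urls: list[str]) -> list[str]:
    # Each entry stands in for one GetAllTextFromLink node.
    return [f"(review text from {u})" for u in urls]

def merge(reviews: list[str]) -> str:
    return "\n---\n".join(reviews)

def summarize(all_reviews: str) -> dict:
    raw = fake_llm(
        "Identify common themes, pros/cons, and overall sentiment:\n" + all_reviews
    )
    return dict(
        (k.strip(), v.strip())
        for k, _, v in (p.partition(":") for p in raw.split("|"))
    )

summary = summarize(merge(collect_reviews([
    "https://example.com/reviews/1",
    "https://example.com/reviews/2",
])))
print(summary["score"])  # → 4/5
```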
Multi-Step Processing Pipeline
Purpose: Complex content processing that requires multiple AI operations
graph TD
Input[Raw Content] --> Step1[Extract Key Info]
Step1 --> Step2[Analyze Sentiment]
Step2 --> Step3[Generate Summary]
Step3 --> Step4[Create Action Items]
Step4 --> Output[Complete Analysis]
style Step1 fill:#e1f5fe
style Step2 fill:#e8f5e8
style Step3 fill:#fff3e0
style Step4 fill:#f3e5f5
Real-world example: Customer Feedback Processor
- Extract: GetAllTextFromLink → collect feedback from multiple sources
- Categorize: Basic LLM Chain → classify feedback types (bug, feature request, complaint)
- Analyze: Sentiment Analysis Chain → determine emotional tone
- Prioritize: Basic LLM Chain → assign urgency scores
- Summarize: Basic LLM Chain → create executive summary
- Format: EditFields → structure for reporting
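The feedback-processor steps above can be sketched as a pipeline of small functions, each taking the running state and returning an updated copy. The model call is stubbed:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "(model response for: " + prompt.split("\n")[0] + ")"

def categorize(state: dict) -> dict:
    return {**state, "category": fake_llm("Classify feedback type:\n" + state["text"])}

def analyze_sentiment(state: dict) -> dict:
    return {**state, "sentiment": fake_llm("Determine emotional tone:\n" + state["text"])}

def prioritize(state: dict) -> dict:
    return {**state, "urgency": fake_llm("Assign an urgency score:\n" + state["text"])}

def summarize(state: dict) -> dict:
    return {**state, "summary": fake_llm("Create an executive summary:\n" + state["text"])}

PIPELINE = [categorize, analyze_sentiment, prioritize, summarize]

def run(text: str) -> dict:
    state = {"text": text}
    for step in PIPELINE:
        state = step(state)
    return state

report = run("The export button crashes the app on large files.")
print(sorted(report.keys()))
```

Because each stage only reads and extends the shared state, you can reorder, remove, or insert steps without touching the others.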
Knowledge-Based Patterns
Smart Q&A System
Purpose: Answer questions accurately using your documents
graph TD
Docs[Your Documents] --> Load[Document Loader]
Load --> Split[Text Splitter]
Split --> Embed[Create Embeddings]
Embed --> Store[Vector Store]
Question[User Question] --> Search[Search Documents]
Store --> Search
Search --> Context[Relevant Context]
Context --> Answer[AI Answer + Sources]
style Store fill:#6d28d9,stroke:#fff,color:#fff
style Answer fill:#e8f5e8
Components used:
- Document Loader (prepare content)
- Text Splitter (chunk documents)
- Embeddings (create searchable vectors)
- Vector Store (smart storage)
- RAG Node (question answering)
Setup process:
- Prepare documents: Load your knowledge base using Document Loader
- Chunk content: Use Recursive Character Text Splitter to break into searchable pieces
- Create embeddings: Convert chunks to vectors using OpenAI or Ollama Embeddings
- Store vectors: Save in Local Knowledge or cloud vector store
- Set up Q&A: Connect RAG Node to search and answer questions
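As a toy illustration of the chunk → index → retrieve flow, here is a stdlib-only sketch. A real workflow would use an embedding model and a vector store; here "similarity" is just word overlap, and the final LLM call is left as a comment:

```python
def split_text(text: str, chunk_size: int = 80) -> list[str]:
    """Crude stand-in for a recursive character splitter."""
    words, chunks, current = text.split(), [], []
    for w in words:
        current.append(w)
        if sum(len(x) + 1 for x in current) >= chunk_size:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

def score(question: str, chunk: str) -> int:
    """Word-overlap stand-in for embedding similarity."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def answer(question: str, chunks: list[str]) -> str:
    context = max(chunks, key=lambda c: score(question, c))
    # A real RAG node would now call the LLM with `context` + `question`.
    return f"Answer based on: {context}"

docs = "Refunds are accepted within 30 days of purchase. Shipping is free over $50."
chunks = split_text(docs, chunk_size=40)
print(answer("are refunds accepted after 30 days", chunks))
```

The shape is the same as the production setup: the expensive work (splitting and indexing) happens once, and each question only pays for one search plus one model call.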
Perfect for:
- Company knowledge bases
- Technical documentation search
- Customer support automation
- Research assistance
Contextual Knowledge Assistant
Purpose: AI that builds knowledge over time and provides increasingly relevant answers
graph LR
Question[User Question] --> Memory[Check Memory]
Memory --> Search[Search Knowledge]
Search --> Combine[Combine Context]
Combine --> Answer[Contextual Answer]
Answer --> Update[Update Memory]
Update --> Memory
style Memory fill:#6d28d9,stroke:#fff,color:#fff
style Search fill:#6d28d9,stroke:#fff,color:#fff
Components used:
- Vector Store (knowledge base)
- Conversation Memory (context tracking)
- RAG Node (intelligent search)
- Basic LLM Chain (response generation)
Example: Personal Research Assistant
- Remembers what topics you’re interested in
- Builds knowledge about your research areas over time
- Provides increasingly relevant and personalized responses
- Connects new information to previous conversations
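A minimal sketch of the memory loop in the diagram: each turn checks memory, searches the knowledge base, answers with both, then records the turn. The search and model are stubbed:

```python
class Assistant:
    def __init__(self, knowledge: dict[str, str]):
        self.knowledge = knowledge      # stand-in for a vector store
        self.memory: list[str] = []     # stand-in for conversation memory

    def search(self, question: str) -> str:
        """Keyword lookup standing in for vector search."""
        for topic, text in self.knowledge.items():
            if topic in question.lower():
                return text
        return "(no match)"

    def ask(self, question: str) -> str:
        context = self.search(question)
        history = " | ".join(self.memory[-3:])  # only the last few turns
        answer = f"[history: {history}] [context: {context}]"
        self.memory.append(question)            # update memory for next turn
        return answer

bot = Assistant({"refund": "Refunds are accepted within 30 days."})
bot.ask("what is your refund policy?")
print(bot.ask("and after 30 days?"))
```

Note the second answer still carries the first question in its history slice; that is the whole trick behind "increasingly relevant" responses.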
Agent-Based Patterns
Goal-Oriented Research Agent
Purpose: AI that can plan and execute complex research tasks
graph TD
Goal[Research Goal] --> Plan[Create Plan]
Plan --> Tool1[Web Search]
Plan --> Tool2[Content Extract]
Plan --> Tool3[Data Analysis]
Tool1 --> Evaluate[Evaluate Progress]
Tool2 --> Evaluate
Tool3 --> Evaluate
Evaluate --> Complete{Goal Achieved?}
Complete -->|No| Adapt[Adapt Plan]
Complete -->|Yes| Report[Final Report]
Adapt --> Plan
style Plan fill:#6d28d9,stroke:#fff,color:#fff
style Evaluate fill:#6d28d9,stroke:#fff,color:#fff
Components used:
- Tools Agent (intelligent coordinator)
- Web search tools (information gathering)
- Content extraction tools (data collection)
- Analysis tools (insight generation)
- Memory (maintain context)
Example workflow:
- Goal: “Research competitor pricing for SaaS products”
- Planning: Agent decides to search for competitors, visit their sites, extract pricing
- Execution: Uses web search → content extraction → data analysis
- Adaptation: If pricing not found on main pages, tries pricing pages or contact forms
- Reporting: Compiles comprehensive competitive analysis
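The plan → execute → evaluate loop can be sketched like this. The tools and the planner are stubbed; in a real agent, the LLM chooses the next tool at each step:

```python
def web_search(query: str) -> str:
    """Stand-in for a web search tool."""
    return f"(search results for {query!r})"

def extract_content(source: str) -> str:
    """Stand-in for a content extraction tool."""
    return f"(pricing table from {source})"

TOOLS = {"search": web_search, "extract": extract_content}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    plan = ["search", "extract"]   # a real planner is an LLM call
    findings = []
    for _ in range(max_steps):
        if not plan:               # goal achieved: plan exhausted
            break
        tool = plan.pop(0)
        findings.append(TOOLS[tool](goal))
    return findings

report = run_agent("competitor pricing for SaaS products")
print(len(report))  # → 2
```

The `max_steps` cap is what keeps an adapting agent from looping forever when a goal turns out to be unreachable.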
Multi-Tool Coordination Agent
Purpose: AI that can use multiple tools intelligently to accomplish complex tasks
Available tool categories:
- Information gathering: Web search, content extraction, API calls
- Data processing: Analysis, formatting, calculations
- Content creation: Writing, summarization, report generation
- Browser automation: Form filling, navigation, interaction
Pattern structure:
- Goal definition: Clear objective for the agent
- Tool selection: Choose relevant tools for the domain
- Execution limits: Set maximum steps to prevent infinite loops
- Progress monitoring: Track agent decisions and results
- Result compilation: Format final output appropriately
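The "execution limits" and "progress monitoring" points translate to a few lines of code: wrap each tool so every call is logged, and cap the loop. A sketch with a hypothetical `summarize_text` tool:

```python
def make_monitored(tool, log):
    """Wrap a tool so every call is recorded for later review."""
    def wrapper(arg):
        log.append((tool.__name__, arg))
        return tool(arg)
    return wrapper

def summarize_text(text: str) -> str:
    """Hypothetical tool; a real one would call a model."""
    return text[:20] + "..."

log: list[tuple[str, str]] = []
summarize = make_monitored(summarize_text, log)

MAX_STEPS = 3
for step in range(10):
    if step >= MAX_STEPS:   # execution limit: stop before runaway loops
        break
    summarize(f"step {step} content to process")

print(len(log))  # → 3
```

Inspecting `log` afterwards shows exactly which tools the agent used and with what inputs, which is usually the fastest way to debug a misbehaving agent.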
Conversational Patterns
Context-Aware Chatbot
Purpose: Conversational AI that maintains context and can take actions
graph LR
Message[User Message] --> Memory[Load Context]
Memory --> Understand[Understand Intent]
Understand --> Decide[Decide Action]
Decide --> Tools[Use Tools if Needed]
Decide --> Respond[Generate Response]
Tools --> Respond
Respond --> Save[Save to Memory]
Save --> Memory
style Memory fill:#6d28d9,stroke:#fff,color:#fff
style Understand fill:#6d28d9,stroke:#fff,color:#fff
Components used:
- Chat Model (conversation)
- Conversation Memory (context)
- Tools (actions)
- Basic LLM Chain (response generation)
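A rough sketch of how these components fit together. Intent detection and the reply are stubbed with simple rules; a real bot would let the model decide when a tool is needed:

```python
def needs_tool(message: str) -> bool:
    """Keyword rule standing in for LLM intent detection."""
    return "find" in message.lower() or "search" in message.lower()

def rag_tool(query: str) -> str:
    """Stand-in for a RAG tool over company docs."""
    return "(policy text retrieved from knowledge base)"

def chat(message: str, memory: list[str]) -> str:
    context = " | ".join(memory)
    if needs_tool(message):
        reply = f"I looked that up: {rag_tool(message)}"
    else:
        reply = f"(answer using prior context: {context})"
    memory.append(message)   # save the turn for later context
    return reply

memory: list[str] = []
chat("Find information about our refund policy", memory)
print(chat("What if it's been 35 days?", memory))
```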
Conversation flow example:
User: "Find information about our refund policy"
Bot: "I'll search our knowledge base for refund information..." [Uses RAG tool to search company docs] "According to our policy document, customers can request refunds within 30 days..."
User: "What if it's been 35 days?"
Bot: "Based on our previous discussion about the 30-day policy, requests after 30 days require manager approval..."
Specialized Domain Assistant
Purpose: Expert AI for specific domains (legal, medical, technical, etc.)
Pattern components:
- Domain knowledge: Specialized vector store with expert content
- Domain tools: Specific tools for the field (calculators, databases, APIs)
- Safety checks: Validation and disclaimer systems
- Expert prompting: Specialized prompts for domain expertise
Performance Optimization Patterns
Efficient Processing Pipeline
Purpose: Handle large volumes of content efficiently
Optimization strategies:
- Batch processing: Process multiple items together
- Caching: Store frequently used results
- Streaming: Process content as it arrives
- Parallel processing: Use multiple chains simultaneously
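Caching is the easiest of these wins to show in code: hash each prompt and reuse stored results so identical content is never sent to the model twice. `fake_llm` stands in for a paid API call:

```python
import hashlib

calls = 0

def fake_llm(prompt: str) -> str:
    """Stand-in for a paid API call; counts invocations."""
    global calls
    calls += 1
    return f"(response to {prompt!r})"

cache: dict[str, str] = {}

def cached_llm(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in cache:
        cache[key] = fake_llm(prompt)   # only pay on a cache miss
    return cache[key]

cached_llm("Summarize this article")
cached_llm("Summarize this article")   # served from cache, no API call
print(calls)  # → 1
```

In a real deployment the `cache` dict would live in Redis or on disk so results survive restarts and are shared across workers.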
Cost-Optimized Workflows
Purpose: Minimize AI API costs while maintaining quality
Cost reduction techniques:
- Model selection: Use appropriate model for each task (GPT-3.5 for simple, GPT-4 for complex)
- Prompt optimization: Shorter, more focused prompts
- Result caching: Avoid re-processing identical content
- Local models: Use Ollama for privacy and cost control
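Model selection often comes down to a small routing function: send simple or short prompts to a cheap model and complex ones to a premium model. The model names here are illustrative labels, not real endpoints:

```python
def pick_model(prompt: str, complex_task: bool) -> str:
    """Route by task complexity and prompt length."""
    if complex_task or len(prompt) > 500:
        return "premium-model"   # e.g. a GPT-4-class model
    return "cheap-model"         # e.g. a GPT-3.5-class or local Ollama model

print(pick_model("Classify this sentence", complex_task=False))  # → cheap-model
print(pick_model("Draft a detailed legal analysis ...", complex_task=True))  # → premium-model
```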
Pattern Selection Guide
Choose based on your goal:
Content Analysis → Simple Analysis Chain or Multi-Step Pipeline
Question Answering → Smart Q&A System or Contextual Knowledge Assistant
Research Tasks → Goal-Oriented Research Agent or Multi-Tool Coordination
Conversations → Context-Aware Chatbot or Specialized Domain Assistant
High Volume → Efficient Processing Pipeline with optimization patterns
Consider your constraints:
Budget-conscious → Use local models and cost-optimized patterns
Privacy-focused → Local Knowledge + Ollama models
Speed-critical → Simpler patterns with faster models
Quality-critical → More sophisticated patterns with premium models
These patterns provide proven starting points for building intelligent workflows. Mix, match, and modify them to create solutions perfectly suited to your specific needs.