
LangChain Components Guide

LangChain components are the building blocks of intelligent workflows. Each component has a specific purpose, and you combine them to create sophisticated AI applications without writing any code.

Think of components like specialized tools in a workshop - each one does something specific really well, and you choose the right combination for your project.

[Image: Overview of different LangChain components and their purposes]

Chat Models

The “brain” that powers your AI workflows:

Best for: General intelligence, creative tasks, conversation

Available models:

  • GPT-4: Most capable, best reasoning
  • GPT-3.5 Turbo: Fast, cost-effective
  • GPT-4 Turbo: Balance of capability and speed

Perfect for:

  • Content creation and editing
  • Complex analysis and reasoning
  • Conversational interfaces
  • Creative writing tasks
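
For readers curious about the library these components wrap, here is a minimal sketch of attaching an OpenAI chat model with LangChain's Python API. It assumes the langchain-openai package is installed and an OPENAI_API_KEY environment variable is set; the prompt text is just an example.

from langchain_openai import ChatOpenAI

# Pick the model that matches your speed/quality trade-off (see the list above).
llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# A single call: the model returns a message whose .content holds the text.
reply = llm.invoke("Write a two-sentence announcement for a reusable water bottle.")
print(reply.content)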

Chains

Pre-built sequences for common AI tasks:

Chain Type | Purpose | When to Use
Basic LLM Chain | Simple AI text processing | Content analysis, summarization, generation
Q&A Chain | Answer questions about content | Document analysis, customer support
Summarization Chain | Create summaries of long content | Report generation, content curation
Sentiment Analysis | Determine emotional tone | Social media monitoring, feedback analysis
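
As a rough illustration of what a Basic LLM Chain does internally, the sketch below wires a prompt, a model, and an output parser together with LangChain's Python API. The package names, model choice, and sample text are assumptions for the example, not part of the guide above.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt -> model -> plain string, connected with the pipe operator.
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in three bullet points:\n\n{text}"
)
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

print(chain.invoke({"text": "LangChain components are building blocks for AI workflows..."}))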

Agents

AI that can think, plan, and use tools:

graph TD
    Goal[User Goal] --> Think[Agent Thinks]
    Think --> Plan[Creates Plan]
    Plan --> Select[Selects Tools]
    Select --> Execute[Executes Actions]
    Execute --> Check[Checks Results]
    Check --> Success{Goal Achieved?}
    Success -->|No| Adapt[Adapts Plan]
    Success -->|Yes| Complete[Task Complete]
    Adapt --> Select
    
    style Think fill:#6d28d9,stroke:#fff,color:#fff
    style Plan fill:#6d28d9,stroke:#fff,color:#fff

Agent capabilities:

  • Tool Selection: Automatically chooses the right tools for each task
  • Planning: Breaks complex goals into manageable steps
  • Adaptation: Adjusts approach when things don’t go as expected
  • Reasoning: Explains its decisions and thought process
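
A tools agent can be sketched in Python roughly as follows, using LangChain's create_tool_calling_agent helper. The multiply tool, prompt wording, and model choice are assumptions made for the example.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

tools = [multiply]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools when they help."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # where the agent records intermediate steps
])

# The agent decides when to call the tool; the executor runs the think/act loop.
agent = create_tool_calling_agent(ChatOpenAI(model="gpt-4"), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

print(executor.invoke({"input": "What is 12.5 times 8?"})["output"])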

Memory

Give your AI the ability to remember and learn.

Conversation Memory

Purpose: Remember chat history and context

How it works:

  • Stores recent messages in conversation
  • Provides context to AI for relevant responses
  • Maintains conversation flow naturally

Best for:

  • Chatbots and virtual assistants
  • Customer support systems
  • Interactive workflows

Example:

User: "What's the weather like?"
AI: "It's sunny and 75°F today."
User: "What about tomorrow?"
AI: "Tomorrow will be cloudy with a high of 68°F." (remembers location context)

Vector Stores

Smart document storage that understands meaning:

Store Type | Best For | Key Features
Local Knowledge | Browser-based, private | Offline, no external dependencies
Pinecone | Production, scalable | Cloud-hosted, high performance
Supabase | Full-stack apps | Database + vector search
Qdrant | Self-hosted, flexible | Open source, customizable
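
As a sketch of how any of these stores is used, the example below builds a small local index with FAISS (chosen only because it runs locally and needs the faiss-cpu and langchain-community packages; Pinecone, Supabase, and Qdrant follow the same from_texts / similarity_search pattern). The sample sentences are made up.

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

texts = [
    "Customer is unhappy with the delivery time.",
    "Client dissatisfied with the support response.",
    "User loves the new dashboard design.",
]

# Build a small local index; swap FAISS for a hosted store in production.
store = FAISS.from_texts(texts, embedding=OpenAIEmbeddings())

# Search by meaning rather than keywords.
for doc in store.similarity_search("frustrated user", k=2):
    print(doc.page_content)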

Embedding Models

Convert text into searchable vectors:

Model | Best For | Strengths
OpenAI Embeddings | General purpose | High quality, widely compatible
Ollama Embeddings | Local, private | Offline, no API costs
Cohere Embeddings | Multilingual | Strong non-English support

graph LR
    Text["'Customer is unhappy'"] --> Embed[Embedding Model]
    Embed --> Vector["[0.2, 0.8, 0.1, ...]"]
    Vector --> Search[Vector Search]
    Search --> Similar["'Client dissatisfied'<br/>'User frustrated'"]
    
    style Embed fill:#6d28d9,stroke:#fff,color:#fff
    style Similar fill:#e8f5e8
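
The diagram above corresponds to a single embedding call. A minimal sketch, assuming the langchain-openai package and an API key:

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Each piece of text becomes a list of floats; similar meanings land close together.
vector = embeddings.embed_query("Customer is unhappy")
print(len(vector), vector[:5])  # dimension count and the first few values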

Tools

Extend what your AI can do:

  • Wikipedia: Look up factual information
  • SerpAPI: Perform web searches
  • Calculator: Handle mathematical calculations
  • Wolfram Alpha: Complex computations and data
  • GetAllTextFromLink: Extract text content from web pages
  • GetHTMLFromLink: Get structured HTML data
  • GetImagesFromLink: Collect images from pages
  • FormFiller: Automatically fill web forms
  • EditFields: Transform and format data
  • Code Tool: Execute custom logic
  • Filter: Remove unwanted data
  • Merge: Combine multiple data sources
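
Beyond the built-in tools listed above, a custom tool takes only a few lines of Python. The calculator below is a hypothetical example using LangChain's @tool decorator; eval() is used only to keep the demo short.

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression such as '2 * (3 + 4)'."""
    # eval() keeps the demo short; use a real math parser in production workflows.
    return str(eval(expression))

# The function name, docstring, and type hints become the tool's name, description, and schema.
print(calculator.name, "-", calculator.description)
print(calculator.invoke({"expression": "2 * (3 + 4)"}))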

Document Loaders

Handle different types of content:

Loader Type | Handles | Use Cases
Default Document Loader | Plain text, web content | General content processing
GitHub Document Loader | Code repositories | Documentation analysis, code review
PDF Loader | PDF documents | Report analysis, research papers
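
In Python these loaders share the same load() pattern. The file name and URL below are placeholders, and the PDF and web loaders need the pypdf and beautifulsoup4 packages respectively.

from langchain_community.document_loaders import PyPDFLoader, WebBaseLoader

# PDF loader: returns one Document per page (the file name is a placeholder).
pdf_docs = PyPDFLoader("quarterly_report.pdf").load()

# Web loader: plain text extracted from a page (placeholder URL).
web_docs = WebBaseLoader("https://example.com/article").load()

print(len(pdf_docs), "PDF pages loaded")
print(web_docs[0].page_content[:200])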

Text Splitters

Break large documents into manageable chunks:

Splitter Type | How It Works | Best For
Character Text Splitter | Splits by character count | Simple, predictable chunks
Recursive Character Splitter | Respects structure (paragraphs, sentences) | Most content types
Token Splitter | Splits by AI token limits | Optimizing AI processing costs
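
A sketch of the recursive splitter, which tries paragraph breaks first, then sentences, then single characters. The chunk sizes and sample text are arbitrary examples.

from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = (
    "First paragraph describing the product in detail...\n\n"
    "Second paragraph covering pricing and availability..."
)

splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,    # maximum characters per chunk
    chunk_overlap=20,  # overlap preserves context across chunk boundaries
)

for chunk in splitter.split_text(long_text):
    print(repr(chunk))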

Common Workflow Patterns

Web content analysis
Components: GetAllTextFromLink + Basic LLM Chain + EditFields
Purpose: Extract and analyze web content automatically

Document Q&A
Components: Document Loader + Embeddings + Vector Store + RAG Node
Purpose: Answer questions about your documents accurately

Autonomous research
Components: Tools Agent + Web Search + Content Extraction + Memory
Purpose: Conduct research that adapts to what it discovers

Conversational assistant
Components: Chat Model + Conversation Memory + Multiple Tools
Purpose: Helpful assistant that remembers context and can take actions
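
To show how the pieces combine, here is a rough end-to-end sketch of the Document Q&A pattern in LangChain's Python API: split, embed, index, retrieve, and answer. The sample policy text, model choices, and the FAISS store are assumptions made for the example.

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# 1. Load and split (a raw string here; use a document loader for real files).
raw_text = "Our refund policy allows returns within 30 days of purchase, with the original receipt."
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(raw_text)

# 2. Embed and index, then expose the store as a retriever.
retriever = FAISS.from_texts(chunks, embedding=OpenAIEmbeddings()).as_retriever()

def format_docs(docs):
    # Concatenate the retrieved chunks into one context block.
    return "\n\n".join(doc.page_content for doc in docs)

# 3. Retrieve relevant chunks, then answer strictly from them.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4")
    | StrOutputParser()
)

print(chain.invoke("How long do customers have to return an item?"))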

Choosing the Right Components

Choosing a model:

  • Speed vs Quality: GPT-3.5 Turbo for speed, GPT-4 for quality
  • Cost vs Capability: Local models for cost control, cloud models for capability
  • Privacy vs Convenience: Local processing for privacy, cloud for convenience

Choosing memory:

  • Short conversations: Simple conversation memory
  • Long conversations: Summary memory to manage costs
  • Knowledge building: Vector memory for learning over time

Choosing a vector store:

  • Small datasets (< 1,000 docs): Local Knowledge
  • Medium datasets (1,000-100k docs): Pinecone or Supabase
  • Large datasets (100k+ docs): Specialized vector databases

Each component serves a specific purpose in building intelligent workflows. Understanding their strengths helps you choose the right combination for your specific needs.