
LangChain concepts in Agentic WorkFlow

This page explains how LangChain concepts and features map to Agentic WorkFlow nodes for browser-based AI workflows.

This page lists the LangChain-focused nodes in Agentic WorkFlow. You can use any browser extension node in a workflow that interacts with LangChain, linking LangChain to web content and browser context manipulation.

/// note | Agentic WorkFlow implements LangChain JS
This feature is Agentic WorkFlow’s implementation of LangChain’s JavaScript framework, optimized for browser environments.
///

Agentic WorkFlow’s LangChain implementation includes specialized browser extension nodes, and integration patterns built on them, that enable AI workflows to interact with web content:

  • Text Extraction Nodes: Extract selected text or full page content for AI processing
  • HTML Processing Nodes: Capture and analyze HTML structure with AI models
  • Link Collection Nodes: Gather and process links for AI-powered navigation
  • Image Processing Nodes: Collect and analyze images from web pages
  • Content Analysis: Use LangChain agents to analyze web page content extracted via browser nodes
  • Smart Extraction: Combine text splitters with browser content extraction for intelligent data processing
  • Context-Aware AI: Leverage browser context (current page, selected text) to provide more relevant AI responses
  • Interactive Processing: Create AI workflows that respond to user interactions with web content
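
As a rough sketch of the context-aware pattern above, a workflow can fold browser context (current page, selected text) into the prompt it sends to a model. The function and field names below are hypothetical illustrations, not part of Agentic WorkFlow’s API:

```javascript
// Hypothetical sketch: combine browser context with a user question
// to build a context-aware prompt for an AI node.
function buildContextAwarePrompt({ pageUrl, pageTitle, selectedText }, question) {
  const lines = [`Page: ${pageTitle} (${pageUrl})`];
  if (selectedText) {
    lines.push(`Selected text: "${selectedText}"`);
  }
  lines.push(`Question: ${question}`);
  return lines.join('\n');
}

const prompt = buildContextAwarePrompt(
  { pageUrl: 'https://example.com', pageTitle: 'Example', selectedText: 'lorem ipsum' },
  'Summarize the selection.'
);
// prompt now carries page context alongside the question
```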

Browser extension workflows can be triggered by various user interactions:

  • Content Selection: Trigger AI workflows when users select text on web pages
  • Page Load: Automatically process page content with AI when pages load
  • User Actions: Respond to clicks, form submissions, or other browser events
  • Context Menu: Provide AI-powered options in browser context menus
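
The triggers above amount to routing browser events to workflow handlers. A minimal sketch of that dispatch, with illustrative trigger names rather than Agentic WorkFlow’s actual event API:

```javascript
// Hypothetical sketch: route browser events to workflow handlers.
// The trigger names mirror the list above; the registry is illustrative.
const triggers = new Map();

function onTrigger(name, handler) {
  triggers.set(name, handler);
}

function fireTrigger(name, payload) {
  const handler = triggers.get(name);
  return handler ? handler(payload) : null;
}

// Run an AI workflow when the user selects text on a page.
onTrigger('content-selection', ({ text }) => `AI workflow received: ${text}`);

const result = fireTrigger('content-selection', { text: 'hello' });
```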

Agentic WorkFlow provides AI nodes that work seamlessly with browser extension capabilities. These nodes can process web content extracted through browser context manipulation.

These nodes form the core of AI workflows and can process data from browser extension nodes.

A chain is a series of LLMs and related tools linked together to support functionality that a single LLM alone can’t provide.

Available nodes:

  • Basic LLM Chain
  • Retrieval Q&A Chain
  • Summarization Chain
  • Sentiment Analysis
  • Text Classifier

Learn more about chaining in LangChain.
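
The chain idea can be sketched in plain JavaScript as a pipeline where each step receives the previous step’s output. The step functions here are illustrative stand-ins for LLM calls and tools, not Agentic WorkFlow APIs:

```javascript
// Minimal sketch of a chain: each step's output feeds the next step.
function runChain(steps, input) {
  return steps.reduce((value, step) => step(value), input);
}

const summarizeChain = [
  (text) => text.trim(),              // clean the input
  (text) => text.split('. ')[0],      // stand-in for a summarization LLM call
  (summary) => `Summary: ${summary}`, // format the result
];

const out = runChain(summarizeChain, '  First sentence. Second sentence.  ');
// → 'Summary: First sentence'
```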

An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next. (source)

Available nodes:

  • Agent

Learn more about Agents in LangChain.
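
The agent idea can be sketched as choosing a tool based on the input. In a real agent an LLM makes that choice; here a keyword check stands in, and the tools are toy illustrations rather than Agentic WorkFlow nodes:

```javascript
// Illustrative tool registry; not real tool implementations.
const tools = {
  calculator: (expr) => {
    // Stand-in calculator: handles only "a + b".
    const [a, b] = expr.split('+').map(Number);
    return String(a + b);
  },
  search: (query) => `results for "${query}"`,
};

function agentStep(input) {
  // A real agent asks an LLM which tool fits; a regex check stands in here.
  const toolName = /\d\s*\+\s*\d/.test(input) ? 'calculator' : 'search';
  return { toolName, output: tools[toolName](input) };
}

const step = agentStep('2 + 2'); // picks the calculator tool
```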

Vector stores store embedded data, and perform vector searches on it.

  • Simple Vector Store
  • PGVector Vector Store
  • Pinecone Vector Store
  • Qdrant Vector Store
  • Supabase Vector Store
  • Zep Vector Store

Learn more about Vector stores in LangChain.
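
What a vector store does can be sketched in a few lines: hold embedded vectors and return the entries closest to a query vector. This toy in-memory version uses cosine similarity; real vector store nodes persist data and scale far beyond this:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k entries most similar to the query vector.
function search(store, queryVector, k) {
  return store
    .map((entry) => ({ ...entry, score: cosine(entry.vector, queryVector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const store = [
  { text: 'cats', vector: [1, 0] },
  { text: 'dogs', vector: [0.9, 0.1] },
  { text: 'stocks', vector: [0, 1] },
];

const hits = search(store, [1, 0], 2); // nearest two entries to [1, 0]
```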

Utility nodes.

LangChain Code: import and use LangChain code directly. This means that if there is functionality you need that Agentic WorkFlow hasn’t created a node for, you can still use it.

These nodes provide additional functionality and can be configured to work with browser-extracted content.

Document loaders add data to your chain as documents. In a browser context, they work with browser extension nodes to process web content.

Browser Integration Patterns:

  • Use browser extension nodes to extract web content, then process with document loaders
  • Combine text extraction nodes with document loaders for intelligent content processing
  • Process selected text or full page content as documents for AI analysis

Available nodes:

  • Default Document Loader - Process browser-extracted content as documents
  • GitHub Document Loader - Load GitHub content for AI processing

Learn more about Document loaders in LangChain.
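
The browser integration patterns above boil down to wrapping extracted content as documents. The shape below mirrors LangChain JS’s Document (`pageContent` plus `metadata`); the node name in the metadata is a hypothetical example:

```javascript
// Wrap browser-extracted content as a document for downstream AI nodes.
function toDocument(pageContent, metadata) {
  return { pageContent, metadata };
}

const doc = toDocument('Extracted page text', {
  source: 'https://example.com',
  extractedBy: 'text-extraction-node', // hypothetical node name
});
```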

LLMs (large language models) are programs trained on large datasets to interpret and generate natural-language text. They’re the key element of working with AI.

Available nodes:

  • Anthropic Chat Model
  • AWS Bedrock Chat Model
  • Cohere Model
  • Hugging Face Inference Model
  • Mistral Cloud Chat Model
  • Ollama Chat Model
  • Ollama Model
  • OpenAI Chat Model

Learn more about Language models in LangChain.

Memory retains information about previous queries in a series of queries. For example, when a user interacts with a chat model, it’s useful if your application can remember and call on the full conversation, not just the most recent query entered by the user.

Available nodes:

  • Motorhead
  • Redis Chat Memory
  • Postgres Chat Memory
  • Simple Memory
  • Xata
  • Zep

Learn more about Memory in LangChain.
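
The memory idea can be sketched as a buffer of prior turns that is replayed on each new query, so the model sees the full conversation rather than only the latest message. This is a toy illustration, not the Simple Memory node’s implementation:

```javascript
// Toy chat memory: keep prior turns so later queries see the whole conversation.
class SimpleChatMemory {
  constructor() {
    this.messages = [];
  }
  add(role, content) {
    this.messages.push({ role, content });
  }
  // What a model would receive on the next turn: the full history.
  history() {
    return this.messages.map((m) => `${m.role}: ${m.content}`).join('\n');
  }
}

const memory = new SimpleChatMemory();
memory.add('user', 'My name is Ada.');
memory.add('assistant', 'Nice to meet you, Ada.');
memory.add('user', 'What is my name?');
// memory.history() now includes the earlier turn that answers the question
```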

Output parsers take the text generated by an LLM and format it to match the structure you require.

Available nodes:

  • Auto-fixing Output Parser
  • Item List Output Parser
  • Structured Output Parser

Learn more about Output parsers in LangChain.
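
As a sketch of what an item-list parser does conceptually, the function below turns raw numbered or bulleted LLM text into a structured array. It is an illustration of the pattern, not the Item List Output Parser node’s actual logic:

```javascript
// Parse a numbered or bulleted LLM response into an array of items.
function parseItemList(llmText) {
  return llmText
    .split('\n')
    .map((line) => line.replace(/^\s*[-*\d.]+\s*/, '').trim()) // strip bullets/numbers
    .filter((line) => line.length > 0);                        // drop empty lines
}

const items = parseItemList('1. apples\n2. oranges\n- pears\n');
// → ['apples', 'oranges', 'pears']
```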

Retrievers fetch documents relevant to a query, for example from a vector store.

Available nodes:

  • Contextual Compression Retriever
  • MultiQuery Retriever
  • Vector Store Retriever
  • Workflow Retriever

Text splitters break down data (documents), making it easier for the LLM to process the information and return accurate results.

Available nodes:

  • Character Text Splitter
  • Recursive Character Text Splitter
  • Token Splitter

Agentic WorkFlow’s text splitter nodes implement parts of LangChain’s text_splitter API.
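
The core idea behind character-based splitting is fixed-size chunks with an overlap, so context isn’t lost at chunk boundaries. A minimal sketch of that idea (not the node’s actual implementation, which also respects separators):

```javascript
// Split text into fixed-size chunks with the given overlap.
function splitText(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap; // advance by size minus overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

const chunks = splitText('abcdefghij', 4, 2);
// → ['abcd', 'cdef', 'efgh', 'ghij', 'ij'] — each chunk repeats
//   the last two characters of the previous one
```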

Utility tools.

  • Calculator
  • Code Tool
  • SerpAPI
  • Think Tool
  • Vector Store Tool
  • Wikipedia
  • Wolfram|Alpha
  • Workflow Tool

Embeddings capture the “relatedness” of text, images, video, or other types of information. (source)

Available nodes:

  • Embeddings AWS Bedrock
  • Embeddings Cohere
  • Embeddings Google PaLM
  • Embeddings Hugging Face Inference
  • Embeddings Mistral Cloud
  • Embeddings Ollama
  • Embeddings OpenAI

Learn more about Text embeddings in LangChain.
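
To make “relatedness” concrete without a real embedding model, the toy below represents texts as word sets and scores their overlap (Jaccard similarity). Real embeddings are dense vectors learned by a model; this only illustrates the idea of a numeric relatedness score:

```javascript
// Toy relatedness: compare texts by word overlap (NOT a real embedding).
function wordSet(text) {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function relatedness(a, b) {
  const sa = wordSet(a), sb = wordSet(b);
  let shared = 0;
  for (const w of sa) if (sb.has(w)) shared++;
  return shared / new Set([...sa, ...sb]).size; // Jaccard similarity in [0, 1]
}

const close = relatedness('the cat sat', 'the cat ran');   // related texts
const far = relatedness('the cat sat', 'stock market news'); // unrelated texts
```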

  • Chat Memory Manager