Models and dependencies

AI workflows in Agentic WorkFlow are built by connecting agent nodes to dependency nodes. A dependency node rarely performs the final task itself; it supplies a capability, such as a model, memory, tools, or a parser, that an agent or chain uses.

```mermaid
flowchart TD
  Agent["AI agent or chain"] --> LLM["Chat LLM"]
  Agent --> Memory["Chat memory"]
  Agent --> Tools["Tools"]
  Agent --> Parser["Output parser"]
  RAG["RAG / Indexer"] --> Splitter["Text splitter"]
  RAG --> Embeddings["Embeddings"]
  RAG --> Store["Vector store"]
  Store --> Embeddings

  style Agent fill:#e1f5fe,stroke:#0277bd
  style RAG fill:#fff3e0,stroke:#ef6c00
```
| Dependency | What it provides | Example nodes |
| --- | --- | --- |
| Chat LLM | Text reasoning and generation | Chat OpenAI, Chat Anthropic, Chat Google, Ollama, Web LLM, Chrome AI |
| Embeddings | Converts text to vectors | OpenAI Embeddings, Ollama Embeddings |
| Vector store | Stores and searches embedded content | Local Knowledge |
| Text splitter | Breaks large text into chunks | Character Text Splitter, Recursive Character Text Splitter |
| Chat memory | Keeps conversation state | Local Memory |
| Tool | Gives an agent an action it can call | Wikipedia Query |
| Output parser | Makes model output structured | Structured Output Parser |
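The RAG dependencies in the table compose into one pipeline: the text splitter chunks a source, embeddings vectorize each chunk, and the vector store ranks chunks against a query vector. The sketch below illustrates that flow in plain Python; the function names and the toy letter-frequency embedding are illustrative only, not any node's real API.

```python
import math

def split_text(text, chunk_size=40):
    """Character text splitter: fixed-size chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text):
    """Toy embedding: normalized letter-frequency vector (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def search(store, query, k=1):
    """Vector store lookup: cosine similarity between query and stored chunks."""
    qv = embed(query)
    scored = [(sum(a * b for a, b in zip(qv, vec)), chunk) for chunk, vec in store]
    return [chunk for _, chunk in sorted(scored, reverse=True)[:k]]

# Index once, query many times -- the Store --> Embeddings edge in the diagram.
chunks = split_text("Ollama runs models locally. Chat OpenAI is a hosted model.")
store = [(c, embed(c)) for c in chunks]
print(search(store, "local models"))
```

A real workflow swaps `embed` for an Embeddings node and `store` for a vector store node; the shape of the pipeline stays the same.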

Choose the model based on the job, not only on raw capability.

| Need | Prefer |
| --- | --- |
| Fast local experimentation | Web LLM, Chrome AI, Ollama |
| Strong general reasoning | Chat OpenAI, Chat Anthropic, Chat Google |
| Private local workflows | Ollama or browser-local models |
| Source-grounded search | Embeddings + vector store + RAG |
| Reliable downstream automation | Chat model + structured output parser |
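"Chat model + structured output parser" is the reliable-automation pairing because the parser turns free-form model text into validated fields a downstream node can consume. A minimal sketch of that idea, with a hypothetical schema and field names:

```python
import json

# Hypothetical schema: required field names mapped to expected Python types.
SCHEMA = {"title": str, "year": int}

def parse_structured(model_output, schema=SCHEMA):
    """Extract the first JSON object from model text and check field types."""
    start, end = model_output.find("{"), model_output.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in model output")
    data = json.loads(model_output[start:end + 1])
    for field, ftype in schema.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return data

# Models often wrap JSON in prose; the parser tolerates that.
reply = 'Sure! Here is the record: {"title": "Dune", "year": 1965}'
print(parse_structured(reply))  # {'title': 'Dune', 'year': 1965}
```

Failing loudly on a missing or mistyped field is the point: downstream automation should stop at the parser, not on malformed data three nodes later.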

Keep dependencies close to the agent that uses them. A workflow is easier to debug when each AI step has an explicit model, explicit memory, explicit tools, and explicit parser.

```mermaid
flowchart LR
  Text["Extracted page text"] --> Chain["Basic LLM Chain"]
  Model["Chat OpenAI"] --> Chain
  Parser["Structured Output Parser"] --> Chain
  Chain --> Rows["Clean JSON rows"]
```

- Use one primary chat model per AI step unless you have a clear reason to compare models.
- Add memory only when previous conversation turns should affect later answers.
- Add tools only when the agent must choose actions.
- Add a parser when another node will consume the model output.
- Keep local/browser models for privacy-sensitive or offline-friendly workflows, but test quality on your actual tasks.
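The "add memory only when needed" rule is easier to judge once you see what a chat-memory node actually does: it prepends prior turns to each new prompt. A sketch under that assumption, with illustrative names rather than a specific node's API:

```python
class LocalMemory:
    """Keeps conversation state as a bounded list of (role, text) turns."""

    def __init__(self, max_turns=10):
        self.turns = []
        self.max_turns = max_turns

    def add(self, role, text):
        self.turns.append((role, text))
        # Drop the oldest turns so the prompt stays bounded.
        self.turns = self.turns[-self.max_turns:]

    def as_prompt(self, new_user_text):
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return f"{history}\nuser: {new_user_text}".lstrip()

memory = LocalMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada.")
# The model now sees earlier turns, so "What is my name?" is answerable.
print(memory.as_prompt("What is my name?"))
```

If later answers never depend on earlier turns, this history is pure prompt overhead, which is exactly why the guidance above says to leave memory out by default.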