Performance & Speed Issues

Slow workflows can be frustrating and may cause timeouts or browser crashes. This guide helps you identify performance bottlenecks and optimize workflow execution speed.

Try these first:

  • 🚀 Close unnecessary browser tabs - Reduces memory pressure
  • 🚀 Restart your browser - Clears memory leaks and resets performance
  • 🚀 Update your browser - Latest versions have performance improvements
  • 🚀 Disable other extensions - Reduces resource competition
  • 🚀 Check available RAM - Ensure sufficient system memory

Symptoms:

  • Workflows take minutes to extract simple data
  • Browser becomes unresponsive during extraction
  • “Page unresponsive” warnings appear

Performance comparison:

| Content Size | Extraction Time | Memory Usage | Recommended Action |
| --- | --- | --- | --- |
| < 1MB | < 2 seconds | Low | No optimization needed |
| 1-5MB | 2-10 seconds | Medium | Use CSS selectors |
| 5-20MB | 10-30 seconds | High | Extract in chunks |
| > 20MB | > 30 seconds | Very High | Limit extraction scope |

Optimization strategies:

// Instead of extracting all content
const allText = document.body.innerText; // Slow for large pages

// Use targeted selectors
const specificContent = document.querySelectorAll('.article-content p');
const limitedContent = Array.from(specificContent)
  .slice(0, 50) // Limit to first 50 paragraphs
  .map(p => p.textContent);

Problem: Running many extraction nodes simultaneously

Solution - Sequential Processing:

graph LR
    A[Page Load] --> B[Extract Headers]
    B --> C[Extract Content]
    C --> D[Extract Links]
    D --> E[Process Results]

    style A fill:#e3f2fd
    style B fill:#e8f5e8
    style C fill:#e8f5e8
    style D fill:#e8f5e8
    style E fill:#f3e5f5
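The sequential flow in the diagram above can be sketched as a loop that awaits each step before starting the next. This is an illustrative sketch, not the tool's actual API: `runSequential` and the toy step functions are hypothetical stand-ins for real workflow nodes.

```javascript
// Sequential execution sketch: each step runs only after the previous one
// resolves, keeping peak resource usage low and making failures easy to
// localize. The step functions are hypothetical stand-ins for real nodes.
async function runSequential(steps, input) {
  let result = input;
  for (const step of steps) {
    result = await step(result); // next step waits for this one to finish
  }
  return result;
}

// Usage: toy steps standing in for Extract Headers -> Content -> Links
const steps = [
  async (page) => ({ ...page, headers: ['h1'] }),
  async (page) => ({ ...page, content: 'body text' }),
  async (page) => ({ ...page, links: ['/a'] }),
];
runSequential(steps, { url: 'https://example.com' })
  .then((out) => console.log('extracted fields:', Object.keys(out)));
```

Because only one step is active at a time, memory from each stage can be released before the next begins.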

Optimization techniques:

| Approach | Speed | Memory | Best For |
| --- | --- | --- | --- |
| Single extraction | Fastest | Lowest | Simple data |
| Batched extraction | Medium | Medium | Multiple similar elements |
| Parallel extraction | Variable | Highest | Independent data sources |
| Streaming extraction | Consistent | Low | Large datasets |
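The streaming row in the table above can be sketched with a generator: records are produced and consumed one at a time, so the full dataset never sits in memory. The per-record transform here is a hypothetical placeholder.

```javascript
// Streaming extraction sketch: a generator yields one transformed record
// at a time, so the workflow never holds the whole dataset in memory.
function* streamRecords(source) {
  for (const raw of source) {
    yield raw.trim(); // only one transformed record is live at a time
  }
}

// Consume lazily instead of materializing an array of all results
const seen = [];
for (const record of streamRecords(['  first  ', ' second ', 'third'])) {
  seen.push(record.length); // keep only a small summary, not the records
}
console.log('record lengths:', seen); // -> [5, 6, 5]
```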

Common causes:

  • Large input text sent to AI models
  • Complex prompts requiring extensive processing
  • Multiple AI calls in sequence
  • Using slower AI models

Optimization strategies:

| Problem | Cause | Solution |
| --- | --- | --- |
| Large text input | Sending entire page content | Summarize or chunk text first |
| Complex prompts | Overly detailed instructions | Simplify and focus prompts |
| Sequential AI calls | Waiting for each response | Batch similar requests |
| Slow model choice | Using most capable but slowest model | Use faster models for simple tasks |
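The "Sequential AI calls" row can be sketched as request batching: group several inputs into one prompt instead of issuing one call per item. `callModel` below is a hypothetical stand-in for a real AI API call, not an actual endpoint.

```javascript
// Batching sketch: group inputs into one request instead of one call per
// item. `callModel` is a hypothetical stand-in that fakes a model response.
async function callModel(prompt) {
  // Pretend the model returns one labelled line per input line
  return prompt.split('\n').map((line) => `${line}: neutral`);
}

async function batchedClassify(texts, batchSize = 10) {
  const results = [];
  for (let i = 0; i < texts.length; i += batchSize) {
    const batch = texts.slice(i, i + batchSize);
    results.push(...(await callModel(batch.join('\n')))); // 1 call per batch
  }
  return results;
}

// Usage: 3 texts, batch size 2 -> 2 model calls instead of 3
batchedClassify(['great product', 'slow shipping', 'ok'], 2)
  .then((labels) => console.log(labels.length, 'labels'));
```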

Text chunking example:

// Instead of processing entire article
const fullArticle = document.querySelector('.article').textContent; // 10,000+ words

// Chunk into manageable pieces
function chunkText(text, maxWords = 500) {
  const words = text.split(' ');
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(' '));
  }
  return chunks;
}

const chunks = chunkText(fullArticle, 300); // Process 300 words at a time

Symptoms:

  • Browser becomes sluggish during workflow execution
  • “Out of memory” errors
  • System becomes unresponsive

Memory monitoring:

// Check memory usage (Chrome only)
if (performance.memory) {
  const memInfo = {
    used: Math.round(performance.memory.usedJSHeapSize / 1024 / 1024) + ' MB',
    total: Math.round(performance.memory.totalJSHeapSize / 1024 / 1024) + ' MB',
    limit: Math.round(performance.memory.jsHeapSizeLimit / 1024 / 1024) + ' MB'
  };
  console.log('Memory usage:', memInfo);
}

Memory optimization:

| Technique | Impact | Implementation |
| --- | --- | --- |
| Clear variables | Medium | Set large objects to null after use |
| Limit DOM queries | High | Cache DOM elements, avoid repeated queries |
| Process in batches | High | Break large operations into smaller chunks |
| Use streaming | Very High | Process data as it arrives, don't store all |
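The "Clear variables" technique can be sketched as follows: reduce a large intermediate structure to a small summary, then drop the only reference so the garbage collector can reclaim it before the workflow continues.

```javascript
// "Clear variables" sketch: keep only the small result and null out the
// reference to the large intermediate so it becomes collectable.
function summarize(values) {
  return values.reduce((sum, n) => sum + n, 0);
}

let bigData = new Array(1_000_000).fill(1); // large intermediate
const total = summarize(bigData);
bigData = null; // the million-element array is now eligible for GC
console.log('total:', total); // -> total: 1000000
```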

Symptoms:

  • High CPU usage during workflow execution
  • Browser UI becomes unresponsive
  • Fan noise increases significantly

CPU optimization techniques:

// Use requestAnimationFrame for heavy processing
function processLargeDataset(data, callback) {
  let index = 0;
  const batchSize = 100;

  function processBatch() {
    const endIndex = Math.min(index + batchSize, data.length);
    // Process batch
    for (let i = index; i < endIndex; i++) {
      // Process data[i]
    }
    index = endIndex;

    if (index < data.length) {
      // Continue processing in next frame
      requestAnimationFrame(processBatch);
    } else {
      // Processing complete
      callback();
    }
  }

  processBatch();
}

Add timing measurements:

// Measure individual node performance
const nodeTimings = {};

function measureNode(nodeName, operation) {
  const startTime = performance.now();
  const result = operation();
  const endTime = performance.now();
  nodeTimings[nodeName] = endTime - startTime;
  console.log(`${nodeName} took ${endTime - startTime}ms`);
  return result;
}

// Usage example
const extractedData = measureNode('ContentExtraction', () => {
  return document.querySelectorAll('.content p');
});

Performance API usage:

// Monitor overall workflow performance
performance.mark('workflow-start');
// ... workflow execution ...
performance.mark('workflow-end');
performance.measure('workflow-duration', 'workflow-start', 'workflow-end');

// Get measurements
const measures = performance.getEntriesByType('measure');
measures.forEach(measure => {
  console.log(`${measure.name}: ${measure.duration}ms`);
});

Monitor resource consumption:

// Track resource usage over time
const resourceMonitor = {
  startTime: Date.now(),
  measurements: [],

  measure() {
    const now = Date.now();
    const measurement = {
      timestamp: now - this.startTime,
      memory: performance.memory ? {
        used: performance.memory.usedJSHeapSize,
        total: performance.memory.totalJSHeapSize
      } : null,
      timing: performance.now()
    };
    this.measurements.push(measurement);
    return measurement;
  },

  report() {
    console.table(this.measurements);
  }
};

// Use during workflow
resourceMonitor.measure(); // Before workflow
// ... workflow execution ...
resourceMonitor.measure(); // After workflow
resourceMonitor.report();

Efficient selectors:

// Slow - searches entire document
const slowQuery = document.querySelectorAll('*');
// Fast - specific and targeted
const fastQuery = document.querySelectorAll('.article-content p');
// Faster - use IDs when available
const fastestQuery = document.getElementById('main-content');

Selector performance ranking:

  1. 🟢 ID selectors (#content) - Fastest
  2. 🟡 Class selectors (.article) - Fast
  3. 🟡 Tag selectors (p, div) - Medium
  4. 🟠 Attribute selectors ([data-id]) - Slower
  5. 🔴 Complex selectors (div > p:nth-child(2)) - Slowest
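To verify this ranking on a real page, a small timing helper can compare strategies. In a page you would pass `() => document.querySelectorAll('.article-content p')` and similar; here a plain computation stands in so the sketch runs anywhere, and `timeIt` is an illustrative helper, not part of any library.

```javascript
// Micro-benchmark helper for comparing selector (or any) strategies.
// Runs the function many times and reports total elapsed milliseconds.
function timeIt(label, fn, runs = 1000) {
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)}ms for ${runs} runs`);
  return ms;
}

// Usage with a stand-in workload (swap in real selector queries on a page)
const data = Array.from({ length: 100 }, (_, i) => `item-${i}`);
timeIt('array filter', () => data.filter((s) => s.endsWith('9')));
```

Run each candidate selector through the same helper and keep the fastest one that still returns the data you need.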

Process data in chunks:

// Instead of processing all at once
function processAllData(data) {
  return data.map(item => expensiveOperation(item)); // Blocks browser
}

// Process in batches with delays
async function processBatched(data, batchSize = 50) {
  const results = [];
  for (let i = 0; i < data.length; i += batchSize) {
    const batch = data.slice(i, i + batchSize);
    const batchResults = batch.map(item => expensiveOperation(item));
    results.push(...batchResults);
    // Allow browser to update UI
    await new Promise(resolve => setTimeout(resolve, 10));
  }
  return results;
}

Efficient prompts:

// Slow - overly complex prompt
const slowPrompt = `
Please analyze this text in great detail, considering all possible interpretations,
cultural contexts, linguistic nuances, and provide a comprehensive analysis
including sentiment, themes, key points, and recommendations...
`;
// Fast - focused and specific
const fastPrompt = `
Extract the main topic and sentiment (positive/negative/neutral) from this text:
`;

Choose appropriate models:

| Task Type | Recommended Model | Speed | Accuracy |
| --- | --- | --- | --- |
| Simple classification | Fast/Small models | Very Fast | Good |
| Text summarization | Medium models | Fast | Very Good |
| Complex analysis | Large models | Slow | Excellent |
| Code generation | Specialized models | Medium | Excellent |

Sequential (slower but safer):

graph LR
    A[Extract Text] --> B[Process Text]
    B --> C[Analyze Sentiment]
    C --> D[Generate Summary]
    D --> E[Save Results]

Parallel (faster but more complex):

graph TB
    A[Extract Text] --> B[Process Text]
    B --> C[Analyze Sentiment]
    B --> D[Generate Summary]
    B --> E[Extract Keywords]
    C --> F[Combine Results]
    D --> F
    E --> F
    F --> G[Save Results]

Implement smart caching:

// Cache expensive operations
const cache = new Map();

function cachedOperation(input) {
  const cacheKey = JSON.stringify(input);
  if (cache.has(cacheKey)) {
    console.log('Cache hit');
    return cache.get(cacheKey);
  }
  console.log('Cache miss - computing...');
  const result = expensiveOperation(input);
  cache.set(cacheKey, result);
  return result;
}

Acceptable performance ranges:

| Operation Type | Target Time | Warning Threshold | Action Required |
| --- | --- | --- | --- |
| Simple extraction | < 2 seconds | > 5 seconds | Optimize selectors |
| Complex extraction | < 10 seconds | > 30 seconds | Reduce scope |
| AI processing | < 15 seconds | > 60 seconds | Optimize prompts |
| Data transformation | < 5 seconds | > 20 seconds | Batch processing |
| File operations | < 3 seconds | > 10 seconds | Check file size |
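The table above can be turned into an automatic check: classify each measured duration against the target and warning thresholds for its operation type. The `thresholds` object and `classifyDuration` helper are illustrative, using the values from the table.

```javascript
// Threshold-check sketch based on the table above.
const thresholds = {
  'simple extraction': { targetMs: 2000, warningMs: 5000 },
  'complex extraction': { targetMs: 10000, warningMs: 30000 },
  'ai processing': { targetMs: 15000, warningMs: 60000 },
};

function classifyDuration(type, durationMs) {
  const { targetMs, warningMs } = thresholds[type];
  if (durationMs <= targetMs) return 'ok';        // within target
  if (durationMs <= warningMs) return 'watch';    // slow but tolerable
  return 'action required';                       // past warning threshold
}

console.log(classifyDuration('ai processing', 30000)); // -> "watch"
```

Feed the timings collected by the measurement snippets earlier in this guide into a check like this to flag regressions automatically.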

Create performance tests:

// Performance test suite
const performanceTests = {
  async testExtraction() {
    const start = performance.now();
    const data = document.querySelectorAll('.test-content');
    const end = performance.now();
    return {
      operation: 'Content Extraction',
      duration: end - start,
      dataSize: data.length,
      passed: (end - start) < 2000 // 2 second threshold
    };
  },

  async testProcessing(data) {
    const start = performance.now();
    const processed = data.map(item => item.textContent.trim());
    const end = performance.now();
    return {
      operation: 'Data Processing',
      duration: end - start,
      itemCount: processed.length,
      passed: (end - start) < 5000 // 5 second threshold
    };
  }
};

// Run performance tests
async function runPerformanceTests() {
  const results = [];
  results.push(await performanceTests.testExtraction());
  // Add more tests...
  console.table(results);
  return results;
}

Watch for these signs:

  • ⚠️ Extraction taking > 10 seconds - Content too large or selectors inefficient
  • ⚠️ Memory usage > 500MB - Potential memory leak or excessive data storage
  • ⚠️ CPU usage > 80% - Processing too intensive, needs optimization
  • ⚠️ Browser becoming unresponsive - Operations blocking UI thread
  • ⚠️ Frequent timeouts - Network issues or processing bottlenecks

Common degradation causes:

| Pattern | Cause | Solution |
| --- | --- | --- |
| Gradual slowdown | Memory leaks | Restart browser, optimize memory usage |
| Sudden performance drop | Resource exhaustion | Reduce batch sizes, add delays |
| Inconsistent timing | Network variability | Add retry logic, optimize requests |
| Progressive failure | Accumulating errors | Implement error recovery, reset state |
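The retry logic suggested for the "Inconsistent timing" row can be sketched as a wrapper with exponential backoff. `withRetry` is an illustrative helper, not part of any particular library.

```javascript
// Retry-with-backoff sketch: rerun a flaky async operation with
// exponentially growing delays before giving up.
async function withRetry(operation, attempts = 3, baseDelayMs = 50) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === attempts - 1) throw err; // out of retries
      // Wait 50ms, 100ms, 200ms, ... between attempts
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage: an operation that fails twice, then succeeds on the third try
let calls = 0;
withRetry(async () => {
  calls += 1;
  if (calls < 3) throw new Error('transient network error');
  return 'ok';
}).then((result) => console.log(result, 'after', calls, 'calls'));
```

Cap the attempt count and delays so a persistently failing node surfaces an error instead of stalling the whole workflow.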

Systematic optimization process:

  1. Identify bottlenecks - Use timing and profiling tools
  2. Prioritize improvements - Focus on biggest impact first
  3. Implement optimizations - Make targeted changes
  4. Test and validate - Ensure improvements work
  5. Monitor results - Track performance over time

Design principles:

  • 🎯 Be specific - Use targeted selectors and focused operations
  • 🔄 Process incrementally - Break large operations into smaller chunks
  • 💾 Cache intelligently - Store expensive computation results
  • Optimize early - Consider performance from the start
  • 📊 Measure everything - Track performance metrics consistently