
Rate Limits and Performance

What this covers: How to avoid hitting API limits and keep your workflows running fast and reliably.

Perfect for: API-heavy workflows • Large data processing • Performance optimization • Avoiding errors

Browser workflows can hit limits that cause failures:

  • API rate limits - Too many requests too fast
  • Browser memory limits - Processing too much data at once
  • Network timeouts - Requests taking too long
  • Extension limits - Browser restricting background processing

API Limits

  • Per minute: 60 requests per minute (1 per second)
  • Per day: 1000 requests per 24 hours
  • Concurrent: 5 requests at the same time
  • Data size: 10MB maximum per request

Browser Limits

  • Memory: ~2GB for extension processing
  • Background processing: Slower when tab is inactive
  • Network: 6 connections per domain
  • Storage: 10MB local storage quota

Error messages you’ll see:

  • “Too Many Requests” (429) - You’re going too fast
  • “Service Unavailable” (503) - API is overloaded
  • “Request timeout” - Taking too long to respond
  • “Out of memory” - Browser can’t handle the data size

Warning signs:

  • Workflows suddenly start failing
  • Requests taking much longer than usual
  • Browser becoming slow or unresponsive
  • Getting partial or empty responses

Use the Wait node to slow down:

{"amount": 2, "unit": "seconds"}

Result: 2-second pause between each request

For larger jobs, batch the work (see the sketch after these steps):

  1. Filter your data into small groups (5-10 items)
  2. Process each group with delays
  3. Merge results back together
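
A minimal TypeScript sketch of this batching pattern; processItem is a placeholder for whatever API call your workflow makes, and the batch size and delay are illustrative, not product defaults:

// Split items into small groups, process each group, and pause between groups.
// processItem is a placeholder for your actual API call.
async function processInBatches<T, R>(
  items: T[],
  processItem: (item: T) => Promise<R>,
  batchSize = 5,
  delayMs = 2000,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map((item) => processItem(item)))));
    if (i + batchSize < items.length) {
      // Pause before the next group so requests stay under the per-minute limit.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return results;
}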

Configure HTTP Request for reliability:

{
  "timeout": 30000,     // 30-second timeout
  "retryOnFail": true,  // Retry failed requests
  "maxRetries": 3,      // Try up to 3 times
  "retryDelay": 5000    // 5-second delay between retries
}

For AI operations:

  • Process in batches - Send multiple items to AI at once
  • Use local models when possible to avoid API limits
  • Clear memory between large AI operations

For browsing and page requests:

  • Add 1-2 second delays between page requests
  • Keep the active tab open for faster processing
  • Process smaller chunks of data at a time

For large datasets:

  • Break large datasets into smaller pieces (1000 items max)
  • Use streaming for very large files
  • Cache repeated operations to save time (see the caching sketch after this list)

For API calls:

  • Respect API limits - check the provider's documentation for current limits
  • Use exponential backoff - wait longer after each failure
  • Batch similar requests together when possible
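
As a rough illustration of the caching idea, here is a small TypeScript sketch; fetchFromApi is a hypothetical stand-in for any repeated lookup:

// Remember results of identical requests so each unique key hits the API only once.
const cache = new Map<string, unknown>();

async function cachedLookup<T>(
  key: string,
  fetchFromApi: (key: string) => Promise<T>,
): Promise<T> {
  if (cache.has(key)) {
    return cache.get(key) as T; // served from memory, no API call made
  }
  const result = await fetchFromApi(key);
  cache.set(key, result);
  return result;
}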

Exponential Backoff (recommended)

  • Try 1: Wait 1 second
  • Try 2: Wait 2 seconds
  • Try 3: Wait 4 seconds
  • Try 4: Wait 8 seconds (max)
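
A minimal TypeScript sketch of this schedule; the request function is a placeholder, and your HTTP Request node's own retry settings may cover this for you:

// Retry a failing request, doubling the wait after each attempt: 1s, 2s, 4s, 8s (max).
async function withExponentialBackoff<T>(
  request: () => Promise<T>,
  maxAttempts = 4,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await request();
    } catch (error) {
      if (attempt === maxAttempts) throw error; // out of retries, give up
      const delayMs = Math.min(1000 * 2 ** (attempt - 1), 8000);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("unreachable"); // loop always returns or throws; satisfies the compiler
}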

Simple Retry

{
  "maxRetries": 3,
  "retryDelay": 5000
}

Best practices:

  • Start slow - Begin with longer delays, speed up if needed
  • Cache results - Store data locally to avoid repeat requests
  • Monitor usage - Track how many requests you’re making
  • Plan for failures - Always have retry logic
  • Test with real data - Use realistic data sizes when testing

Common mistakes to avoid:

  • Rapid-fire requests - Don’t send requests as fast as possible
  • Ignoring errors - Always handle rate limit responses (see the sketch after this list)
  • Processing huge datasets at once
  • Running workflows in the background without monitoring
  • Assuming APIs are always available
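
To show what handling a rate limit response can look like, here is a hedged TypeScript sketch using the standard fetch API and the Retry-After header (a single retry only; combine it with the backoff pattern above for real workflows):

// Detect a 429 response, wait as long as the server asks, then retry once.
async function fetchWithRateLimitHandling(url: string): Promise<Response> {
  const response = await fetch(url);
  if (response.status === 429) {
    // Retry-After is in seconds; fall back to 5 seconds if the header is missing.
    const retryAfterSeconds = Number(response.headers.get("Retry-After")) || 5;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
    return fetch(url);
  }
  return response;
}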

Common problems and quick fixes:

“Too Many Requests” errors → Add Wait nodes between requests (2-5 seconds)

Browser running slow/crashing → Process smaller batches of data (100-500 items max)

Workflows timing out → Increase timeout settings and add retry logic

Inconsistent results → Add delays and check for proper error handling

API quota exceeded → Spread requests across longer time periods or upgrade API plan
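
One way to spread requests over a longer period, sketched in TypeScript and assuming the 1000-requests-per-day quota listed above (the numbers are illustrative):

// Space queued requests evenly across 24 hours so the daily quota is never exceeded.
const DAILY_QUOTA = 1000;
const SPACING_MS = Math.ceil((24 * 60 * 60 * 1000) / DAILY_QUOTA); // about 86 seconds

async function drainQueue(requests: Array<() => Promise<unknown>>): Promise<void> {
  for (const request of requests) {
    await request();
    await new Promise((resolve) => setTimeout(resolve, SPACING_MS));
  }
}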

Related guides: HTTP Request Node • Wait Node • Error Handling

Workflow patterns: Data Processing Patterns • API Integration Patterns • Performance Optimization

Learn more: Multi-Step Workflows • Workflow Debugging