Rate limiting helps ensure platform stability, fairness, and consistent performance for all users.
DeBounce applies limits depending on the type of API key and the operation being performed.

Regular API Key

All requests made with a regular (private) API key are subject to concurrency limits.
These limits prevent overload and help ensure fast, reliable validation results.

Concurrent Request Limit

  • You may send up to 5 concurrent API calls.
  • If the data enrichment option is enabled, the limit drops to 2 concurrent calls.
If your application exceeds this limit, the API returns:
HTTP/429 Too Many Requests
{
  "debounce": {
    "error": "Maximum concurrent calls reached",
    "code": "0"
  },
  "success": "0"
}

Recommendations for Developers

  • Use connection pooling or queueing to control concurrency
  • Retry failed requests with backoff timing
  • Cache results when possible
  • Avoid validating large volumes in parallel without a queue
These practices improve performance and help you stay within platform limits.
Need more validation throughput? Contact our team to request upgraded rate limits.
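The retry-with-backoff recommendation can be sketched as a small wrapper that re-issues a call after an exponentially growing delay whenever it sees a 429. This is a minimal illustration: `fake_call` is a hypothetical stub standing in for a real request, and the response shape (`{"status": ...}`) is assumed for the example.

```python
import time

def with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` on 429 responses with exponential backoff (sketch)."""
    for attempt in range(max_attempts):
        resp = call()
        if resp.get("status") != 429:
            return resp
        # Wait base_delay, then 2x, 4x, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("gave up after repeated 429 responses")

# Usage with a stub that returns 429 twice, then succeeds.
attempts = {"n": 0}
def fake_call():
    attempts["n"] += 1
    return {"status": 429} if attempts["n"] < 3 else {"status": 200, "success": "1"}

result = with_backoff(fake_call, base_delay=0.01)
```

Adding random jitter to each delay further reduces the chance that several retrying clients hammer the API in lockstep.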

Public API Key (Client-Side Use)

A public API key starts with public_ and is intended for JavaScript usage, such as real-time form validation widgets.

CORS Requirements

You must add your domain to the key’s approved CORS domain list.

Daily Per-IP Limit

To protect your account from abuse, each IP address may validate up to:
  • 20 emails per day
If a user exceeds this limit, the widget displays “You performed many validations”, and the API returns:
HTTP/429 Too Many Requests
{
  "debounce": {
    "error": "Authentication Failed - The maximum number of calls per day reached."
  },
  "success": "0"
}
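If your backend proxies or logs these responses, you may want to distinguish the per-IP daily-limit error from other 429s so you can show users a friendlier message. A minimal sketch, assuming the response body shape shown above:

```python
import json

def is_daily_limit_error(status_code, body):
    """Return True when a response matches the per-IP daily-limit error shape."""
    if status_code != 429:
        return False
    data = json.loads(body)
    error = data.get("debounce", {}).get("error", "")
    return "maximum number of calls per day" in error.lower()

# Example using the documented error body.
body = ('{"debounce": {"error": "Authentication Failed - '
        'The maximum number of calls per day reached."}, "success": "0"}')
print(is_daily_limit_error(429, body))  # prints True
```

Matching on a substring of the error message keeps the check tolerant of minor wording around it, though matching on the status code alone is safer if the message text ever changes.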
You can check your current public IP address using a service such as WhatIsMyIP.

Bulk Validation Limits

Bulk validation jobs are intentionally serialized to ensure system stability.

Bulk API Rules

  • Only one active bulk validation job is allowed per account
  • Additional uploads or API bulk jobs are queued automatically
  • When the active job finishes, the next queued job begins
This guarantees predictable processing and prevents overload for large lists.
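The queuing behavior above can be modeled as a simple FIFO: one job is active at a time, and each additional upload waits its turn. This is an illustrative sketch of the rule, not DeBounce's actual implementation; the class and method names are hypothetical.

```python
from collections import deque

class BulkJobQueue:
    """Models the one-active-job rule: bulk jobs run strictly in order (sketch)."""
    def __init__(self):
        self.pending = deque()
        self.completed = []
        self.active = None

    def submit(self, job_id):
        # Additional uploads or API bulk jobs are queued automatically.
        self.pending.append(job_id)

    def run_all(self):
        while self.pending:
            self.active = self.pending.popleft()   # only one job active at a time
            self.completed.append(self.active)     # ...list is processed here...
            self.active = None                     # next queued job begins

q = BulkJobQueue()
for job in ["list_a", "list_b", "list_c"]:
    q.submit(job)
q.run_all()
print(q.completed)  # jobs finish in submission order
```

In practice this means your integration should submit bulk lists and then poll for completion, rather than expecting several lists to process in parallel.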