Limitly
API Reference

checkRateLimit()

Checks if a request is allowed based on configured rate limits. Returns detailed information about the rate limit status.

client.checkRateLimit(options?)

This is the core rate-limiting method: given the configured limits, it determines whether a request should be processed or rejected as rate limited.

Function Signature

function checkRateLimit(
  options?: RateLimitOptions | string
): Promise<LimitlyResponse>;

Parameters

options (optional)

Either a configuration object or a string identifier:

// As an object
await client.checkRateLimit({
  identifier: 'user-123',
  capacity: 100,
  refillRate: 10,
  skip: false,
});

// As a string (shorthand for identifier)
await client.checkRateLimit('user-123');

RateLimitOptions

interface RateLimitOptions {
  identifier?: string; // User ID, IP, or other unique identifier
  algorithm?:
    | 'token-bucket'
    | 'sliding-window'
    | 'fixed-window'
    | 'leaky-bucket'; // Override algorithm for this request
  capacity?: number; // Maximum capacity (for token bucket/leaky bucket, default: 100)
  refillRate?: number; // Tokens refilled per second (for token bucket, default: 10)
  limit?: number; // Maximum requests (for sliding/fixed window, default: 100)
  windowSize?: number; // Window size in milliseconds (for sliding/fixed window, default: 60000)
  leakRate?: number; // Leak rate per second (for leaky bucket, default: 10)
  skip?: boolean; // Skip rate limiting (default: false)
}

Returns

A Promise<LimitlyResponse> with the following structure:

interface LimitlyResponse {
  allowed: boolean; // true if request is allowed, false if rate limited
  limit?: number; // Total request capacity
  remaining?: number; // Number of requests remaining
  reset?: number; // Unix timestamp (milliseconds) when limit resets
  message?: string; // Optional error message if not allowed
}

Basic Usage

Check rate limit with just an identifier:

import { createClient } from 'limitly-sdk';

// Recommended: Use your own Redis
const client = createClient({
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379',
  serviceId: 'my-api',
});

// Simple check with identifier
const result = await client.checkRateLimit('user-123');

if (result.allowed) {
  console.log(`Request allowed. ${result.remaining} remaining.`);
} else {
  console.log('Rate limited!');
}

With Custom Limits

Override default limits for this specific check:

// Token bucket (default)
const result = await client.checkRateLimit({
  identifier: 'user-123',
  capacity: 50, // Maximum 50 requests
  refillRate: 5, // Refill 5 tokens per second
});

// Sliding window
const slidingResult = await client.checkRateLimit({
  identifier: 'user-123',
  algorithm: 'sliding-window',
  limit: 100, // 100 requests
  windowSize: 60000, // per 60 seconds
});

// Fixed window
const fixedResult = await client.checkRateLimit({
  identifier: 'user-123',
  algorithm: 'fixed-window',
  limit: 100,
  windowSize: 60000,
});

// Leaky bucket
const leakyResult = await client.checkRateLimit({
  identifier: 'user-123',
  algorithm: 'leaky-bucket',
  capacity: 100,
  leakRate: 10, // Leak 10 per second
});
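
To build intuition for capacity and refillRate, here is a minimal, self-contained sketch of the token-bucket arithmetic. This is only an illustration of the math behind those two parameters, not the SDK's actual implementation:

```typescript
// Illustrative token-bucket arithmetic (not the SDK's implementation).
// A bucket holding `lastTokens` tokens refills at `refillRate` tokens
// per second, capped at `capacity`.
function tokensAvailable(
  lastTokens: number,
  capacity: number,
  refillRate: number,
  elapsedMs: number
): number {
  return Math.min(capacity, lastTokens + refillRate * (elapsedMs / 1000));
}

// With capacity 50 and refillRate 5, a drained bucket holds 5 tokens
// after one second and is full again after ten seconds.
tokensAvailable(0, 50, 5, 1000); // 5
tokensAvailable(0, 50, 5, 10000); // 50
```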

Understanding the Response

allowed (boolean)

Indicates whether the request should be processed:

if (result.allowed) {
  // Process the request
  processRequest();
} else {
  // Return 429 Too Many Requests
  return rateLimitError();
}

limit (number, optional)

The total capacity for this rate limit bucket:

console.log(`User can make up to ${result.limit} requests`);

remaining (number, optional)

How many requests are still available:

if (result.remaining !== undefined) {
  console.log(`${result.remaining} requests remaining`);

  // Set HTTP header
  res.setHeader('X-RateLimit-Remaining', result.remaining.toString());
}

reset (number, optional)

Unix timestamp (milliseconds) when the limit resets (for bucket algorithms, when the bucket is full again; for window algorithms, when the current window ends):

if (result.reset) {
  const resetDate = new Date(result.reset);
  console.log(`Limit resets at: ${resetDate.toISOString()}`);

  // Calculate retry after seconds
  const retryAfter = Math.ceil((result.reset - Date.now()) / 1000);
  res.setHeader('Retry-After', retryAfter.toString());
}

Skip Rate Limiting

Bypass rate limiting for specific cases (e.g., admins):

const result = await client.checkRateLimit({
  identifier: 'user-123',
  skip: user.isAdmin, // Admins bypass rate limits
});

// If skip is true, result.allowed will always be true

Per-Endpoint Limits

Use different limits for different endpoints:

async function checkEndpointLimit(userId: string, endpoint: string) {
  const endpointLimits: Record<
    string,
    { capacity: number; refillRate: number }
  > = {
    '/api/login': { capacity: 5, refillRate: 0.1 },
    '/api/search': { capacity: 100, refillRate: 10 },
    '/api/export': { capacity: 10, refillRate: 0.5 },
  };

  const limits = endpointLimits[endpoint] || { capacity: 50, refillRate: 5 };

  return await client.checkRateLimit({
    identifier: `${userId}:${endpoint}`,
    ...limits,
  });
}

Error Handling

Handle errors gracefully:

try {
  const result = await client.checkRateLimit({
    identifier: userId,
  });

  if (!result.allowed) {
    return handleRateLimit(result);
  }

  return processRequest();
} catch (error) {
  // Handle Redis connection errors, timeouts, etc.
  console.error('Rate limit check failed:', error);

  // Fail open - allow request if rate limiting fails
  return processRequest();
}
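
The catch block above fails open. For sensitive endpoints (e.g., login) you may prefer to fail closed, and a small policy wrapper makes that choice explicit. This is a sketch: checkWithPolicy and the Check type are illustrative names, not part of the SDK:

```typescript
// Sketch of a fail-open/fail-closed policy wrapper. `Check` and
// `checkWithPolicy` are illustrative names, not SDK exports.
type Check = () => Promise<{ allowed: boolean }>;

async function checkWithPolicy(
  check: Check,
  failOpen: boolean
): Promise<boolean> {
  try {
    const result = await check();
    return result.allowed;
  } catch {
    // Infrastructure error (Redis down, timeout): allow or deny
    // depending on the chosen policy.
    return failOpen;
  }
}
```

For example, `checkWithPolicy(() => client.checkRateLimit(userId), false)` denies requests whenever the backing store is unreachable.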

Setting HTTP Headers

Include rate limit information in response headers:

const result = await client.checkRateLimit({ identifier: userId });

// Set standard rate limit headers
if (result.limit) {
  res.setHeader('X-RateLimit-Limit', result.limit.toString());
}

if (result.remaining !== undefined) {
  res.setHeader('X-RateLimit-Remaining', result.remaining.toString());
}

if (result.reset) {
  res.setHeader('X-RateLimit-Reset', Math.ceil(result.reset / 1000).toString());
}

if (!result.allowed) {
  const retryAfter = result.reset
    ? Math.ceil((result.reset - Date.now()) / 1000)
    : 60;
  res.setHeader('Retry-After', retryAfter.toString());
  return res.status(429).json({ error: 'Rate limit exceeded' });
}
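
The header logic above can be factored into a small reusable helper. This is a sketch: setRateLimitHeaders is an illustrative name (not an SDK export), and HeaderSetter is a minimal stand-in for res.setHeader so the helper stays framework-agnostic:

```typescript
// A minimal reproduction of the response shape, for a self-contained example.
interface LimitlyResponse {
  allowed: boolean;
  limit?: number;
  remaining?: number;
  reset?: number;
}

// Stand-in for res.setHeader; works with Express, Fastify wrappers, etc.
type HeaderSetter = (name: string, value: string) => void;

function setRateLimitHeaders(
  result: LimitlyResponse,
  setHeader: HeaderSetter,
  now: number = Date.now()
): void {
  if (result.limit !== undefined) {
    setHeader('X-RateLimit-Limit', result.limit.toString());
  }
  if (result.remaining !== undefined) {
    setHeader('X-RateLimit-Remaining', result.remaining.toString());
  }
  if (result.reset !== undefined) {
    // Header convention is seconds, while `reset` is milliseconds.
    setHeader('X-RateLimit-Reset', Math.ceil(result.reset / 1000).toString());
  }
  if (!result.allowed) {
    const retryAfter =
      result.reset !== undefined ? Math.ceil((result.reset - now) / 1000) : 60;
    setHeader('Retry-After', retryAfter.toString());
  }
}
```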

Performance Considerations

  • Caching: Results are cached in Redis for fast lookups
  • Async: Always use await; the method returns a Promise
  • Batching: Multiple checks can be done in parallel with Promise.all():

// Check multiple users in parallel
const results = await Promise.all([
  client.checkRateLimit('user-1'),
  client.checkRateLimit('user-2'),
  client.checkRateLimit('user-3'),
]);
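
Note that Promise.all rejects the whole batch if any single check throws (for example, on a Redis error). If partial results are preferable, Promise.allSettled can be used instead. The helper below is an illustrative sketch: checkMany is not an SDK export, and the check parameter stands in for client.checkRateLimit:

```typescript
// Sketch: batch rate-limit checks that tolerate individual failures.
// `checkMany` is an illustrative helper; `check` stands in for
// client.checkRateLimit.
async function checkMany(
  check: (id: string) => Promise<{ allowed: boolean }>,
  ids: string[]
): Promise<Array<{ id: string; allowed: boolean }>> {
  const settled = await Promise.allSettled(ids.map((id) => check(id)));
  return settled.map((outcome, i) => ({
    id: ids[i],
    // Fail open for checks that errored out.
    allowed: outcome.status === 'fulfilled' ? outcome.value.allowed : true,
  }));
}
```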