createClient()
Creates a new Limitly client instance for advanced configurations and custom rate limiting setups.
createClient(config?)
Creates a new Limitly client instance with custom configuration. For production, always provide redisUrl for full tenant isolation.
Function Signature
```typescript
function createClient(config?: LimitlyConfig): LimitlyClient
```
config (optional)
Configuration object for the client:
```typescript
interface LimitlyConfig {
  redisUrl?: string;   // ⭐ Recommended for production. Redis connection URL for direct Redis mode
  serviceId?: string;  // Service identifier for isolation
  timeout?: number;    // Request timeout in milliseconds (default: 5000)
  baseUrl?: string;    // Base URL of the Limitly API service (default: https://api.limitly.emmanueltaiwo.dev). Only used when redisUrl is not provided.
}
```
Important:
- With `redisUrl`: the SDK connects directly to your Redis. Full tenant isolation, no collisions with other users. Recommended for production.
- Without `redisUrl`: the SDK uses HTTP API mode (hosted service). It shares Redis with other users and may collide if multiple users use the same `serviceId`. Good for development and testing.
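This choice usually comes down to the environment the process runs in. As a minimal sketch (the helper name `configFromEnv` and the environment-variable names are assumptions for illustration, not part of the SDK), you can derive the config once at startup and fail fast when production is missing a Redis URL:

```typescript
// Mirrors the LimitlyConfig interface above; repeated here so the
// sketch is self-contained.
interface LimitlyConfig {
  redisUrl?: string;
  serviceId?: string;
  timeout?: number;
}

// Hypothetical helper: build a LimitlyConfig from environment variables.
// Throws in production when REDIS_URL is missing, since HTTP API mode
// risks collisions with other users.
function configFromEnv(env: Record<string, string | undefined>): LimitlyConfig {
  const isProduction = env.NODE_ENV === 'production';
  if (isProduction && !env.REDIS_URL) {
    throw new Error('REDIS_URL is required in production');
  }
  return {
    redisUrl: env.REDIS_URL ?? 'redis://localhost:6379',
    serviceId: env.SERVICE_ID ?? 'default',
    timeout: env.TIMEOUT ? parseInt(env.TIMEOUT, 10) : 5000,
  };
}

// const client = createClient(configFromEnv(process.env));
```

Validating the config in one place like this keeps the direct-Redis vs. HTTP-API decision explicit rather than an accident of which variables happen to be set.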
Returns
A LimitlyClient instance with the following methods:
- `checkRateLimit(options?)` - Check if a request is allowed
- Other client methods (see the API reference)
Basic Usage
Recommended: Use your own Redis
```typescript
import { createClient } from 'limitly-sdk';

// Recommended for production
const client = createClient({
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379',
  serviceId: 'my-app'
});
```
Without Redis URL (development/testing):
```typescript
// ⚠️ Shares hosted Redis - may collide with other users
const client = createClient({ serviceId: 'my-app' });
```
Custom Service ID
Isolate rate limits by service or application:
```typescript
// Recommended: Use with your own Redis
const client = createClient({
  redisUrl: process.env.REDIS_URL,
  serviceId: 'my-api-service'
});

// All rate limits using this client will be isolated under 'my-api-service'
const result = await client.checkRateLimit({
  identifier: 'user-123'
});
```
This is useful when you have multiple services and want to keep their rate limits separate:
```typescript
// API service
const apiClient = createClient({ serviceId: 'api-service' });

// Authentication service
const authClient = createClient({ serviceId: 'auth-service' });

// Background job service
const jobClient = createClient({ serviceId: 'job-service' });

// Each service has independent rate limit buckets
await apiClient.checkRateLimit({ identifier: 'user-123' });
await authClient.checkRateLimit({ identifier: 'user-123' });
await jobClient.checkRateLimit({ identifier: 'user-123' });
```
Bring Your Own Redis (Recommended for Production)
Always use your own Redis URL for production deployments to ensure full tenant isolation:
```typescript
// Recommended for production
const client = createClient({
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379',
  serviceId: 'my-app'
});

// All rate limit data stored in your Redis - no collisions
const result = await client.checkRateLimit({ identifier: 'user-123' });
```
Benefits of using your own Redis:
- ✅ Full tenant isolation - No collisions with other Limitly users
- ✅ Data privacy - Your rate limit data stays in your Redis instance
- ✅ Better performance - Direct Redis connection (no HTTP overhead)
- ✅ Production ready - Recommended for all production deployments
Without `redisUrl` (HTTP API mode):
- ⚠️ Shares hosted Redis with other users
- ⚠️ Potential collisions if multiple users use the same `serviceId`
- ✅ Works out of the box with zero configuration
- ✅ Good for development and testing
Custom Timeout
Set a custom timeout for HTTP requests:
```typescript
const client = createClient({
  serviceId: 'my-app',
  timeout: 3000 // 3-second timeout
});
```
This is useful when:
- You want faster failure detection
- Your network has higher latency
- You're using a remote API service
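If you want an application-level deadline on top of the client's `timeout` (for example, to fail open quickly when the backend is slow), one sketch using only standard promises (`withTimeout` and its fallback behavior are assumptions for illustration, not SDK features):

```typescript
// Race a promise against a deadline; resolve with a fallback value if
// the deadline fires first. Useful for failing open when the
// rate-limit backend is slow or unreachable.
async function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    clearTimeout(timer); // avoid a stray timer when work wins the race
  }
}

// Usage sketch - fail open if the check takes longer than 100 ms:
// const result = await withTimeout(
//   client.checkRateLimit({ identifier: 'user-123' }),
//   100,
//   { allowed: true }
// );
```

Note this only bounds how long your code waits; the underlying request may still complete in the background.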
Environment-Based Configuration
Create different clients for different environments:
```typescript
// config/limitly.ts
import { createClient } from 'limitly-sdk';

// Production: Always use your own Redis (recommended)
export const prodClient = createClient({
  redisUrl: process.env.REDIS_URL!, // Required for production
  serviceId: process.env.SERVICE_ID || 'production',
  timeout: parseInt(process.env.TIMEOUT || '5000', 10)
});

// Development: use local Redis
export const devClient = createClient({
  redisUrl: 'redis://localhost:6379',
  serviceId: 'dev',
  timeout: 5000
});

// ⚠️ Not recommended for production: HTTP API mode (shares hosted Redis)
export const apiClient = createClient({
  serviceId: 'my-app'
  // No redisUrl = uses HTTP API, may collide with other users
});
```
Multiple Clients
Create multiple clients for different use cases:
```typescript
// Strict rate limiting for authentication
const authClient = createClient({
  serviceId: 'auth',
  timeout: 2000
});

// Lenient rate limiting for public APIs
const publicClient = createClient({
  serviceId: 'public-api',
  timeout: 5000
});

// Background job rate limiting
const jobClient = createClient({
  serviceId: 'background-jobs',
  timeout: 10000
});
```
Error Handling
Handle connection errors gracefully:
```typescript
import { createClient } from 'limitly-sdk';

const client = createClient({
  serviceId: 'my-app',
  redisUrl: process.env.REDIS_URL
});

async function checkLimitSafely(userId: string) {
  try {
    const result = await client.checkRateLimit({ identifier: userId });
    return result;
  } catch (error) {
    // Handle Redis connection errors
    if (error instanceof Error) {
      console.error('Rate limit check failed:', error.message);
    }
    // Fail open - allow request if rate limiting fails
    return {
      allowed: true,
      error: 'Rate limit service unavailable'
    };
  }
}
```
Best Practices
- Use service IDs: Always specify a `serviceId` to isolate rate limits
- Connection pooling: Limitly handles connection pooling automatically
- Singleton pattern: Create clients once and reuse them:
```typescript
// lib/limitly.ts
import { createClient } from 'limitly-sdk';

let client: ReturnType<typeof createClient> | null = null;

export function getLimitlyClient() {
  if (!client) {
    // Recommended: Always provide redisUrl for production
    client = createClient({
      redisUrl: process.env.REDIS_URL, // Required for production
      serviceId: process.env.SERVICE_ID || 'default'
    });
  }
  return client;
}
```
Custom Rate Limit Strategies
Implement custom rate limiting strategies for different use cases. Learn patterns for per-endpoint limits, adaptive limits, and tier-based systems.
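As a taste of the tier-based pattern, here is a minimal sketch that uses only the documented `{ identifier }` option (the `Tier` type and key scheme are assumptions for illustration):

```typescript
// Give each tier its own bucket by namespacing the identifier, so
// free and pro users never share a rate-limit counter.
type Tier = 'free' | 'pro' | 'enterprise';

function tierIdentifier(tier: Tier, userId: string): string {
  return `${tier}:${userId}`;
}

// await client.checkRateLimit({ identifier: tierIdentifier('pro', 'user-123') });
```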
checkRateLimit()
Checks if a request is allowed based on configured rate limits. Returns detailed information about the rate limit status.