
HTTP Request Rate Limiter Middleware

Kotlin

package com.example.http

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong
import java.util.concurrent.locks.ReentrantLock

// Represents a rate limiter for a specific client or endpoint.
class RateLimiter(private val capacity: Int, private val refillRate: Int, private val timeWindowSeconds: Int) {

    private val tokens = ConcurrentHashMap<String, AtomicLong>()
    private val lastRefillTime = ConcurrentHashMap<String, AtomicLong>()
    private val lock = ReentrantLock()

    // Gets or creates the token count for a given key.
    private fun getTokenCount(key: String): AtomicLong {
        return tokens.computeIfAbsent(key) { AtomicLong(capacity.toLong()) }
    }

    // Gets or creates the last refill timestamp for a given key.
    private fun getLastRefillTime(key: String): AtomicLong {
        return lastRefillTime.computeIfAbsent(key) { AtomicLong(System.currentTimeMillis()) }
    }

    // Refills tokens based on the refill rate and the time elapsed since the last refill.
    private fun refillTokens(key: String) {
        lock.lock()
        try {
            val currentTime = System.currentTimeMillis()
            val elapsedTimeMs = currentTime - getLastRefillTime(key).get()
            // Cap the refill so the bucket never exceeds its capacity.
            val tokensToRefill = (elapsedTimeMs / 1000.0 * refillRate).toLong()
                .coerceAtMost(capacity - getTokenCount(key).get())

            if (tokensToRefill > 0) {
                getTokenCount(key).addAndGet(tokensToRefill)
                getLastRefillTime(key).set(currentTime)
            }
        } finally {
            lock.unlock()
        }
    }

    // Attempts to consume a token. Returns true if successful, false otherwise.
    fun tryConsume(key: String): Boolean {
        refillTokens(key)
        val currentTokens = getTokenCount(key)
        // Compare-and-set loop so a concurrent consumer cannot drive the count negative.
        while (true) {
            val available = currentTokens.get()
            if (available <= 0) return false
            if (currentTokens.compareAndSet(available, available - 1)) return true
        }
    }

    // Checks whether a key has exceeded its rate limit.
    fun isRateLimited(key: String): Boolean {
        return !tryConsume(key)
    }
}

// Middleware that enforces rate limiting on incoming HTTP requests.
class RateLimiterMiddleware(private val limiter: RateLimiter) {

    // Handles the request, applying rate limiting.
    fun handle(request: HttpRequest): HttpResponse {
        val clientIdentifier = request.headers["X-Client-ID"] ?: "anonymous"

        if (limiter.isRateLimited(clientIdentifier)) {
            return HttpResponse.tooManyRequests("Rate limit exceeded for client: $clientIdentifier")
        }

        // Proceed with the request if not rate limited.
        return HttpResponse.ok("Request processed successfully.")
    }
}

// --- Mock HTTP request/response objects for demonstration ---
data class HttpRequest(val method: String, val path: String, val headers: Map<String, String>)

sealed class HttpResponse(val statusCode: Int, val body: String) {
    data class Ok(val responseBody: String) : HttpResponse(200, responseBody)
    data class TooManyRequests(val errorMessage: String) : HttpResponse(429, errorMessage)

    companion object {
        fun ok(body: String): HttpResponse = Ok(body)
        fun tooManyRequests(message: String): HttpResponse = TooManyRequests(message)
    }
}

fun main() {
    // Example usage:
    val rateLimiter = RateLimiter(capacity = 10, refillRate = 5, timeWindowSeconds = 60)
    val middleware = RateLimiterMiddleware(rateLimiter)

    // Simulate requests from two clients.
    val request1 = HttpRequest("GET", "/api/data", mapOf("X-Client-ID" to "user1"))
    val request2 = HttpRequest("GET", "/api/data", mapOf("X-Client-ID" to "user1"))
    val request3 = HttpRequest("GET", "/api/data", mapOf("X-Client-ID" to "user2"))

    println("Request 1: ${middleware.handle(request1)}")
    println("Request 2: ${middleware.handle(request2)}")
    println("Request 3: ${middleware.handle(request3)}")

    // Simulate burst requests to exhaust user1's capacity.
    for (i in 1..12) {
        val req = HttpRequest("GET", "/api/data", mapOf("X-Client-ID" to "user1"))
        println("Burst Request $i: ${middleware.handle(req)}")
    }

    // Simulate a request after a delay to exercise the refill path.
    Thread.sleep(2000)
    println("Request after delay: ${middleware.handle(request1)}")
}

Algorithm description:

This algorithm implements a token bucket rate limiter, a common pattern for controlling the rate of requests to a service. It ensures that a client cannot exceed a predefined number of requests within a given time window. This is crucial for preventing abuse, ensuring fair usage, and maintaining service stability in distributed systems.

Algorithm explanation:

The RateLimiter uses a token bucket algorithm. Each client (identified by a key) has a bucket with a certain capacity. Tokens are added to the bucket at a fixed refill rate. When a request arrives, a token is consumed; if the bucket is empty, the request is denied. The `refillTokens` function calculates how many tokens to add based on the time elapsed since the last refill and the `refillRate`, capping the result so the bucket never exceeds `capacity`. The `tryConsume` method attempts to decrement a token and returns true if successful. Concurrent access is an important edge case: `ConcurrentHashMap` and `ReentrantLock` guard the shared state against race conditions, and the cap ensures token counts never exceed capacity. The time complexity of `tryConsume` is O(1) on average, since the map operations are amortized O(1). Space complexity is O(N), where N is the number of unique client identifiers being tracked.
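As a concrete check of the refill arithmetic described above, the sketch below recomputes the number of tokens to add for a hypothetical bucket. The function name and the numbers are illustrative, not taken from the listing:

```kotlin
// Tokens earned in elapsedMs at refillRate tokens/second,
// capped so the bucket never exceeds capacity.
fun tokensToRefill(elapsedMs: Long, refillRate: Int, capacity: Int, current: Long): Long {
    val earned = (elapsedMs / 1000.0 * refillRate).toLong()
    return earned.coerceAtMost(capacity - current)
}

fun main() {
    // 2 s elapsed at 5 tokens/s earns 10 tokens into an empty 10-token bucket.
    check(tokensToRefill(2_000, 5, 10, 0) == 10L)
    // With 7 tokens already held, only 3 of those 10 earned tokens fit.
    check(tokensToRefill(2_000, 5, 10, 7) == 3L)
    println("ok")
}
```

The cap is what distinguishes a token bucket from an unbounded counter: idle clients accumulate at most `capacity` tokens of burst allowance.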

Pseudocode:

RateLimiter:
  Initialize capacity, refillRate, timeWindowSeconds.
  Initialize tokens map (key -> AtomicLong).
  Initialize lastRefillTime map (key -> AtomicLong).
  Initialize a lock.

  Function tryConsume(key):
    Refill tokens for key.
    If current tokens > 0:
      Decrement tokens for key.
      Return true.
    Else:
      Return false.

  Function refillTokens(key):
    Acquire lock.
    Calculate elapsed time since last refill.
    Calculate tokens to refill from elapsed time and refillRate, capped so the count never exceeds capacity.
    If tokens to refill > 0:
      Add tokens to key's count.
      Update last refill time for key.
    Release lock.
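The RateLimiter pseudocode above can be condensed into a single-bucket sketch. Per-key maps and locking are omitted, and the clock is injected so refill can be exercised deterministically; the class and method names here are illustrative:

```kotlin
// Minimal single-bucket token bucket with a simulated clock (for illustration only).
class TokenBucket(private val capacity: Int, private val refillRate: Int) {
    private var nowMs = 0L
    private var tokens = capacity.toLong()
    private var lastRefillMs = 0L

    // Advance the simulated clock instead of sleeping.
    fun advance(ms: Long) { nowMs += ms }

    private fun refill() {
        val earned = ((nowMs - lastRefillMs) / 1000.0 * refillRate).toLong()
        if (earned > 0) {
            tokens = (tokens + earned).coerceAtMost(capacity.toLong())
            lastRefillMs = nowMs
        }
    }

    fun tryConsume(): Boolean {
        refill()
        if (tokens > 0) { tokens--; return true }
        return false
    }
}

fun main() {
    val bucket = TokenBucket(capacity = 2, refillRate = 1)
    check(bucket.tryConsume())   // 2 tokens -> 1
    check(bucket.tryConsume())   // 1 token  -> 0
    check(!bucket.tryConsume())  // empty bucket: request denied
    bucket.advance(1_000)        // 1 s at 1 token/s earns one token back
    check(bucket.tryConsume())
    println("ok")
}
```

This mirrors the pseudocode's two operations (refill, then conditional consume) without the concurrency machinery that the full listing needs for shared use.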

Middleware:
  Initialize RateLimiter.

  Function handle(request):
    Get clientIdentifier from request headers.
    If RateLimiter.isRateLimited(clientIdentifier):
      Return TooManyRequests response.
    Else:
      Return Ok response.
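The middleware's handle step reduces to a single branch on the limiter's answer. As a minimal sketch with the limiter stubbed out as a predicate (all names here are illustrative):

```kotlin
// Map a client identifier to an HTTP status and body, given a rate-limit predicate.
fun handle(clientId: String, isRateLimited: (String) -> Boolean): Pair<Int, String> =
    if (isRateLimited(clientId)) 429 to "Rate limit exceeded for client: $clientId"
    else 200 to "Request processed successfully."

fun main() {
    // Stub limiter with a budget of one request.
    var budget = 1
    val limiter = { _: String -> if (budget > 0) { budget--; false } else true }

    check(handle("user1", limiter).first == 200)  // within budget: allowed
    check(handle("user1", limiter).first == 429)  // budget exhausted: rejected
    println("ok")
}
```

Keeping the limiter behind a predicate like this makes the middleware trivial to test in isolation from the token-bucket state.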