
Laravel Concurrent Limiter
==========================


[![Latest Version on Packagist](https://img.shields.io/packagist/v/largerio/laravel-concurrent-limiter.svg?style=flat-square)](https://packagist.org/packages/largerio/laravel-concurrent-limiter)[![Tests](https://img.shields.io/github/actions/workflow/status/largerio/laravel-concurrent-limiter/run-tests.yml?branch=main&label=tests&style=flat-square)](https://github.com/largerio/laravel-concurrent-limiter/actions/workflows/run-tests.yml)[![PHPStan](https://img.shields.io/github/actions/workflow/status/largerio/laravel-concurrent-limiter/phpstan.yml?branch=main&label=phpstan&style=flat-square)](https://github.com/largerio/laravel-concurrent-limiter/actions/workflows/phpstan.yml)[![Total Downloads](https://img.shields.io/packagist/dt/largerio/laravel-concurrent-limiter.svg?style=flat-square)](https://packagist.org/packages/largerio/laravel-concurrent-limiter)

A Laravel middleware package that limits the number of **concurrent** requests per user (or IP). Unlike rate limiting, which counts requests over time, this package controls how many requests can be processed **simultaneously**.

Features
--------


- **HTTP Middleware** - Limit concurrent requests per user/IP with automatic queuing
- **Job Middleware** - Limit concurrent queue job execution to protect external APIs
- **Adaptive Limiting** - Auto-adjust limits based on latency using AIMD algorithm
- **Prometheus Metrics** - Built-in `/metrics` endpoint for monitoring
- **Fail-safe** - Configurable behavior when cache is unavailable
- **Events** - Full request lifecycle tracking (wait, acquire, release, exceed)
- **Extensible** - Custom key resolvers and response handlers

Requirements
------------


- PHP 8.3 or higher
- Laravel 11.x or 12.x
- Cache store with atomic operations (Redis recommended)

Table of Contents
-----------------


- [Quick Start](#quick-start)
- [Installation](#installation)
- [HTTP Middleware](#http-middleware)
- [Job Middleware](#job-middleware)
- [Configuration](#configuration)
- [Adaptive Limiting](#adaptive-limiting)
- [Events](#events)
- [Custom Key Resolver](#custom-key-resolver)
- [Custom Response Handler](#custom-response-handler)
- [Prometheus Metrics](#prometheus-metrics)
- [Cache](#cache)
- [Artisan Commands](#artisan-commands)
- [Troubleshooting](#troubleshooting)
- [License](#license)

Quick Start
-----------


```
// routes/api.php
Route::middleware('concurrent.limit:5,30')->group(function () {
    Route::get('/heavy-endpoint', HeavyController::class);
});
```

This limits each user to **5 concurrent requests**, waiting up to **30 seconds** for a slot before returning 503.

Installation
------------


Install via Composer:

```
composer require largerio/laravel-concurrent-limiter
```

The service provider is auto-discovered. To publish the config file:

```
php artisan vendor:publish --provider="Largerio\LaravelConcurrentLimiter\LaravelConcurrentLimiterServiceProvider" --tag="config"
```

HTTP Middleware
---------------


Apply the middleware to routes using the `concurrent.limit` alias:

```
use Illuminate\Support\Facades\Route;

// Parameters: maxParallel, maxWaitTime, prefix
Route::middleware('concurrent.limit:10,30,api')->group(function () {
    Route::get('/data', [DataController::class, 'index']);
});
```

Or use the static helper:

```
use Largerio\LaravelConcurrentLimiter\LaravelConcurrentLimiter;

Route::middleware(LaravelConcurrentLimiter::with(10, 30, 'api'))->group(function () {
    // ...
});
```

**How it works:**

1. Generates a unique key based on user ID (or IP if unauthenticated)
2. Increments a counter in cache
3. If over limit, waits (polling every 100ms) until a slot is free
4. If timeout reached, returns 503 with JSON error
5. After processing, decrements the counter
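The cycle above can be sketched in plain PHP. This is an illustrative, in-memory simplification (the class and method names here are hypothetical, not the package's internals); the real middleware uses Laravel's cache store with atomic increments, locks, and TTLs:

```
// Illustrative sketch of the acquire/poll/release cycle.
final class SlotLimiter
{
    /** @var array<string, int> stand-in for the shared cache counter */
    private array $counts = [];

    public function run(string $key, int $maxParallel, float $maxWait, callable $next): mixed
    {
        $this->counts[$key] ??= 0;
        $waited = 0.0;

        // Steps 2-3: increment the counter; if over the limit, back off and poll
        while (++$this->counts[$key] > $maxParallel) {
            --$this->counts[$key];              // give the slot back
            if ($waited >= $maxWait) {
                return ['status' => 503];       // step 4: timeout → 503 error
            }
            usleep(100_000);                    // poll every 100 ms
            $waited += 0.1;
        }

        try {
            return $next();                     // process the request
        } finally {
            --$this->counts[$key];              // step 5: always release the slot
        }
    }
}
```

The `finally` block mirrors why stuck counters are rare in practice: the slot is released even if the request throws.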

Job Middleware
--------------


Limit concurrent execution of queued jobs to protect external APIs or shared resources:

```
use Largerio\LaravelConcurrentLimiter\JobConcurrentLimiter;

class ProcessPayment implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function middleware(): array
    {
        return [
            new JobConcurrentLimiter(
                maxParallel: 5,        // Max 5 concurrent jobs
                key: 'stripe-api',     // Shared key for all Stripe jobs
                releaseAfter: 30,      // Retry after 30 seconds if limited
                shouldRelease: true    // Auto-release job back to queue
            ),
        ];
    }

    public function handle(): void
    {
        // Process payment with Stripe...
    }
}
```


**Parameters:**

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `maxParallel` | int | 5 | Maximum concurrent jobs |
| `key` | string | `'default'` | Identifier for grouping jobs |
| `releaseAfter` | int | 30 | Seconds before retrying |
| `shouldRelease` | bool | true | Release job back to queue if limited |

**Use Cases:**

- Limit API calls to third-party services (Stripe, Twilio, etc.)
- Prevent database overload from batch processing
- Control concurrent file processing or exports

Configuration
-------------


| Option | Default | Description |
| --- | --- | --- |
| `max_parallel` | 10 | Maximum concurrent requests per user |
| `max_wait_time` | 30 | Seconds to wait before returning 503 |
| `ttl_buffer` | 60 | Extra TTL seconds for cache safety |
| `cache_prefix` | `concurrent-limiter:` | Cache key prefix |
| `cache_store` | null | Cache store (null = default) |
| `error_message` | "Too many concurrent..." | 503 response message |
| `retry_after` | true | Include Retry-After header |
| `key_resolver` | null | Custom KeyResolver class |
| `response_handler` | null | Custom ResponseHandler class |
| `on_cache_failure` | `'allow'` | Behavior on cache failure: `'allow'` or `'reject'` |
| `logging.enabled` | false | Log when limits are exceeded |
| `logging.channel` | null | Log channel (null = default) |
| `logging.level` | `'warning'` | Log level |
| `metrics.enabled` | false | Enable Prometheus metrics endpoint |
| `metrics.route` | `'/concurrent-limiter/metrics'` | Metrics endpoint path |
| `metrics.middleware` | `[]` | Middleware for metrics endpoint |
| `adaptive.enabled` | false | Enable adaptive concurrency limiting |
| `adaptive.algorithm` | `'vegas'` | Algorithm: `'vegas'` or `'gradient2'` |
| `adaptive.min_limit` | 1 | Minimum concurrency limit |
| `adaptive.max_limit` | 100 | Maximum concurrency limit |
| `adaptive.ewma_alpha` | 0.3 | EWMA smoothing factor (Vegas) |
| `adaptive.sample_window` | 60 | Metrics TTL in seconds |
| `adaptive.min_rtt_reset_samples` | 1000 | Reset minRTT after N samples (Vegas) |
| `adaptive.rtt_tolerance` | 2.0 | Acceptable latency multiplier (Gradient2) |

Adaptive Limiting
-----------------


Automatically adjust `maxParallel` based on observed response latency using algorithms inspired by Netflix's concurrency-limits library.

### Available Algorithms


**Vegas (default)** - Based on TCP Vegas congestion control:

- Tracks minimum RTT (best-case latency) as baseline
- Compares current latency to baseline to detect queueing
- Uses dynamic alpha/beta thresholds based on current limit
- Best for: Server-side protection, proactive congestion detection

**Gradient2** - Based on EWMA divergence:

- Tracks short-term and long-term EWMA
- Detects latency trends by comparing the two averages
- Configurable tolerance for latency increase
- Best for: Detecting gradual degradation, noisy environments

### Enable Adaptive Limiting


```
// config/concurrent-limiter.php
'adaptive' => [
    'enabled' => true,
    'algorithm' => 'vegas',         // 'vegas' or 'gradient2'
    'min_limit' => 1,               // Never go below this
    'max_limit' => 100,             // Never exceed this
    'ewma_alpha' => 0.3,            // EWMA smoothing (Vegas)
    'sample_window' => 60,          // Metrics TTL in seconds
    'min_rtt_reset_samples' => 1000, // Reset minRTT after N samples (Vegas)
    'rtt_tolerance' => 2.0,         // Acceptable latency multiplier (Gradient2)
],
```

### How Adaptive Interacts with maxParallel


When adaptive limiting is enabled, the `maxParallel` parameter from your route acts as a **hard cap**:

```
// Route configuration
Route::middleware('concurrent.limit:10,30,api')->group(...);
//                              ↑
//                              maxParallel = 10 (hard cap)

// Adaptive can only REDUCE the limit, never exceed maxParallel
// Effective limit = min(maxParallel, adaptiveLimit)
```

| Scenario | maxParallel | Adaptive calculates | Effective limit |
| --- | --- | --- | --- |
| Good latency | 10 | 15 | **10** (capped) |
| High latency | 10 | 3 | **3** (reduced) |
| No metrics yet | 10 | 10 | **10** (initial) |

This ensures that adaptive limiting is a **safety optimization**: it can reduce load when latency degrades, but it never allows more concurrent requests than you explicitly configured.

### Vegas Algorithm Details


**Formula:**

```
gradient = minRTT / avgRTT
queueUse = limit × (1 - gradient)

alpha = max(1, 10% of limit)
beta = max(2, 20% of limit)

if queueUse < alpha → limit++     (room to grow)
if queueUse > beta → limit--      (too much queueing)
else → stable                     (sweet spot)

```

**Example:** With limit=10, minRTT=100ms, avgRTT=100ms:

- gradient = 1.0, queueUse = 0
- 0 < alpha (1) → increase to 11
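The update rule can be written as a small standalone function. This is an illustrative sketch of the formula above; `vegasUpdate` is not part of the package's API, and the real implementation also clamps to `min_limit`/`max_limit`:

```
// Vegas-style limit update: compare estimated queue use against
// dynamic alpha/beta thresholds derived from the current limit.
function vegasUpdate(int $limit, float $minRtt, float $avgRtt): int
{
    $gradient = $minRtt / $avgRtt;          // 1.0 = no queueing detected
    $queueUse = $limit * (1 - $gradient);   // estimated requests sitting in queue

    $alpha = max(1, 0.1 * $limit);          // grow threshold (10% of limit)
    $beta  = max(2, 0.2 * $limit);          // shrink threshold (20% of limit)

    if ($queueUse < $alpha) {
        return $limit + 1;                  // room to grow
    }
    if ($queueUse > $beta) {
        return max(1, $limit - 1);          // too much queueing
    }
    return $limit;                          // sweet spot: stay put
}

echo vegasUpdate(10, 100.0, 100.0); // 11 — the worked example above
```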

### Gradient2 Algorithm Details


**Formula:**

```
gradient = longEWMA / shortEWMA

if gradient >= 1.02 → limit++     (clearly improving, with 2% hysteresis)
if gradient < 1/tolerance → limit-- (degrading beyond tolerance)
else → stable                     (within tolerance)

```

**Example:** With tolerance=2.0, shortEWMA=200ms, longEWMA=100ms:

- gradient = 0.5, threshold = 0.5
- 0.5 >= 0.5 → stable (just within tolerance)
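As a standalone sketch of the formula above (`gradient2Update` is illustrative, not the package's API; min/max clamping from the config is omitted):

```
// Gradient2-style limit update: compare long-term vs short-term EWMA.
// A gradient below 1 means the short-term average (recent latency) is
// higher than the long-term baseline, i.e. latency is rising.
function gradient2Update(int $limit, float $shortEwma, float $longEwma, float $tolerance): int
{
    $gradient = $longEwma / $shortEwma;

    if ($gradient >= 1.02) {
        return $limit + 1;              // clearly improving (2% hysteresis)
    }
    if ($gradient < 1 / $tolerance) {
        return max(1, $limit - 1);      // degrading beyond tolerance
    }
    return $limit;                      // within tolerance: stable
}

echo gradient2Update(10, 200.0, 100.0, 2.0); // 10 — the worked example above
```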

### Use Cases


- **Auto-scaling protection**: Automatically reduce concurrency when backend is overloaded
- **Variable workloads**: Handle traffic spikes without manual tuning
- **Proactive detection**: Vegas detects congestion before timeouts occur

### Monitoring


Access metrics programmatically:

```
use Largerio\LaravelConcurrentLimiter\Contracts\AdaptiveResolver;

$resolver = app(AdaptiveResolver::class);
$metrics = $resolver->getMetrics('concurrent-limiter:api:user123');

// Vegas metrics:
// ['avg_latency_ms' => 245.5, 'min_latency_ms' => 100.0, 'current_limit' => 12, ...]

// Gradient2 metrics:
// ['short_ewma_ms' => 200.0, 'long_ewma_ms' => 150.0, 'current_limit' => 12, ...]
```

Events
------


The middleware dispatches events for monitoring and logging:

| Event | When | Properties |
| --- | --- | --- |
| `ConcurrentLimitWaitStarted` | Request starts waiting for a slot | `$request`, `$currentCount`, `$maxParallel`, `$key` |
| `ConcurrentLimitAcquired` | Request acquires a slot | `$request`, `$waitedSeconds`, `$key` |
| `ConcurrentLimitExceeded` | Timeout reached, returning 503 | `$request`, `$waitedSeconds`, `$maxParallel`, `$key` |
| `ConcurrentLimitReleased` | Request completed | `$request`, `$totalTime`, `$key` |
| `CacheOperationFailed` | Cache operation fails | `$request` (nullable), `$exception` |

Example listener:

```
use Largerio\LaravelConcurrentLimiter\Events\ConcurrentLimitExceeded;

class LogConcurrentLimitExceeded
{
    public function handle(ConcurrentLimitExceeded $event): void
    {
        Log::warning('Concurrent limit exceeded', [
            'key' => $event->key,
            'waited_seconds' => $event->waitedSeconds,
            'url' => $event->request->fullUrl(),
        ]);
    }
}
```

Custom Key Resolver
-------------------


By default, the middleware uses the authenticated user ID or IP address. Implement `KeyResolver` to customize:

```
namespace App\Limiters;

use Illuminate\Http\Request;
use Largerio\LaravelConcurrentLimiter\Contracts\KeyResolver;

class TenantKeyResolver implements KeyResolver
{
    public function resolve(Request $request): string
    {
        $tenantId = $request->header('X-Tenant-ID') ?? 'default';
        $userId = $request->user()?->id ?? $request->ip();

        return sha1($tenantId . ':' . $userId);
    }
}
```

Register in config:

```
// config/concurrent-limiter.php
'key_resolver' => App\Limiters\TenantKeyResolver::class,
```

Custom Response Handler
-----------------------


Customize the 503 response by implementing `ResponseHandler`:

```
namespace App\Limiters;

use Illuminate\Http\Request;
use Largerio\LaravelConcurrentLimiter\Contracts\ResponseHandler;
use Symfony\Component\HttpFoundation\Response;

class HtmlResponseHandler implements ResponseHandler
{
    public function handle(Request $request, float $waitedSeconds, int $maxWaitTime): Response
    {
        return response()->view('errors.503-concurrent', [
            'waited' => $waitedSeconds,
            'maxWait' => $maxWaitTime,
        ], 503)->header('Retry-After', (string) $maxWaitTime);
    }
}
```

Register in config:

```
// config/concurrent-limiter.php
'response_handler' => App\Limiters\HtmlResponseHandler::class,
```

Prometheus Metrics
------------------


Enable Prometheus-compatible metrics for monitoring:

```
// config/concurrent-limiter.php
'metrics' => [
    'enabled' => true,
    'route' => '/concurrent-limiter/metrics',
    'middleware' => ['auth:api'],
],
```

**Available Metrics:**

| Metric | Type | Description |
| --- | --- | --- |
| `concurrent_limiter_requests_total` | Counter | Total requests processed |
| `concurrent_limiter_exceeded_total` | Counter | Requests rejected (503) |
| `concurrent_limiter_cache_failures_total` | Counter | Cache operation failures |
| `concurrent_limiter_wait_seconds` | Histogram | Time spent waiting for slots |

**Example Output:**

```
# HELP concurrent_limiter_requests_total Total number of requests processed
# TYPE concurrent_limiter_requests_total counter
concurrent_limiter_requests_total{key="all"} 1523

# HELP concurrent_limiter_exceeded_total Total number of requests rejected (503)
# TYPE concurrent_limiter_exceeded_total counter
concurrent_limiter_exceeded_total{key="all"} 42

# HELP concurrent_limiter_wait_seconds Time spent waiting for a slot
# TYPE concurrent_limiter_wait_seconds histogram
concurrent_limiter_wait_seconds_bucket{le="0.1"} 1200
concurrent_limiter_wait_seconds_bucket{le="1"} 1450
concurrent_limiter_wait_seconds_bucket{le="+Inf"} 1523
concurrent_limiter_wait_seconds_sum 156.234
concurrent_limiter_wait_seconds_count 1523

```

**Grafana Tips:**

- Alert on `rate(concurrent_limiter_exceeded_total[5m]) > 10`
- Monitor p99 wait time with histogram quantiles
- Track cache failures for infrastructure issues
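For instance, a p99 wait-time panel can apply a histogram quantile to the built-in histogram (a sketch assuming the default metric names shown above):

```
histogram_quantile(0.99, sum(rate(concurrent_limiter_wait_seconds_bucket[5m])) by (le))
```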

Cache
-----

[](#cache)

### Store Recommendations


The middleware requires a cache store that supports atomic operations:

| Cache Store | Production Ready | Notes |
| --- | --- | --- |
| **Redis** | Yes | Best choice. Supports locks for atomic operations. |
| **Memcached** | Yes | Good alternative to Redis. |
| **DynamoDB** | Yes | Works with Laravel DynamoDB cache driver. |
| **Database** | Limited | Works but may cause contention under high load. |
| **File** | No | No locking support. Race conditions possible. |
| **Array** | No | Only for testing. Data lost between requests. |

Configure in `config/concurrent-limiter.php`:

```
'cache_store' => 'redis', // or null to use default
```

### Key Structure

[](#key-structure)

| Context | Pattern | Example |
| --- | --- | --- |
| HTTP requests | `{prefix}{custom_prefix}{user_id\|ip_hash}` | `concurrent-limiter:api:abc123` |
| Job queue | `{prefix}job:{key}` | `concurrent-limiter:job:stripe-api` |
| Locks | `{key}:lock` | `concurrent-limiter:api:abc123:lock` |

### Failure Handling


By default, if cache is unavailable, requests are allowed through (fail-open). For critical endpoints:

```
// config/concurrent-limiter.php
'on_cache_failure' => 'reject', // Return 503 if cache is unavailable
```

| Mode | Behavior | Use Case |
| --- | --- | --- |
| `allow` | Let requests through | General APIs, non-critical endpoints |
| `reject` | Return 503 error | Payment processing, rate-sensitive operations |

Artisan Commands
----------------


### Check Counter Status


```
php artisan concurrent-limiter:status {key}

# Example output
Key: concurrent-limiter:abc123...
Current count: 3
Max parallel: 10
Status: 3/10 slots in use
```

### Clear Stuck Counters


```
php artisan concurrent-limiter:clear {key} [--force]

# With confirmation
php artisan concurrent-limiter:clear abc123

# Skip confirmation
php artisan concurrent-limiter:clear abc123 --force
```

Troubleshooting
---------------

[](#troubleshooting)

### Always getting 503 errors


1. **Check `maxParallel` setting** - It might be too low for your traffic
2. **Verify cache is working** - Test with `Cache::put('test', 1); Cache::get('test');`
3. **Check for stuck counters** - They expire after `maxWaitTime + ttl_buffer` seconds

### Requests not being limited


1. **Verify middleware is applied** - Run `php artisan route:list`
2. **Check cache store** - `array` driver doesn't persist between requests
3. **Different users/IPs** - Each user/IP has their own limit

### Performance issues


1. **Use Redis** - Fastest option with proper locking support
2. **Reduce `maxWaitTime`** - Lower wait times free up resources faster
3. **Tune `maxParallel`** - Balance between protection and throughput

### Debugging


Enable logging to see when limits are exceeded:

```
'logging' => [
    'enabled' => true,
    'channel' => null,
    'level' => 'warning',
],
```

Changelog
---------


See [CHANGELOG.md](CHANGELOG.md) for release history.

License
-------


This package is open-sourced software licensed under the [MIT license](LICENSE).
